Today we celebrate not destroying the world. We do so today because 38 years ago, Stanislav Petrov made a decision that averted tremendous calamity. It's possible that an all-out nuclear exchange between the US and USSR would not have actually destroyed the world, but there are few things with an equal chance of doing so.
As a lieutenant colonel in the Soviet Air Defence Forces, Petrov was the duty officer for the system built to detect whether the US had launched nuclear missiles at the Soviet Union. On September 26th, 1983, the system reported five incoming missiles. Petrov's job was to report this as an attack to his superiors, who would launch a retaliatory nuclear strike. But instead, contrary to the evidence the system was giving him, he called it in as a false alarm.
Sometimes people will describe a donation as "counterfactually valid" or just "counterfactual". For example, you might offer to donate a counterfactual dollar for every push-up your team does.  The high-level interpretation is that you're doing something you wouldn't have done otherwise.
What does "wouldn't have done otherwise" mean?
The former is fully counterfactually valid (you caused impact) while the latter isn't counterfactually valid at all (the impact of the matching funds was unchanged by your donation).
Say I offer to make a...
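The "wouldn't have done otherwise" idea can be sketched as a simple difference between outcomes. A minimal illustration (the dollar amounts are hypothetical):

```python
# Counterfactual impact = (what happens with your action)
#                       - (what would have happened without it).

def counterfactual_impact(outcome_with, outcome_without):
    return outcome_with - outcome_without

# You donate $100 you would otherwise have spent on yourself:
# the charity's funding genuinely rises by $100.
fully_counterfactual = counterfactual_impact(outcome_with=100, outcome_without=0)

# You direct $100 of "matching funds" that the matcher had already
# committed to giving away regardless: the charity's funding is unchanged.
not_counterfactual = counterfactual_impact(outcome_with=100, outcome_without=100)

print(fully_counterfactual)  # 100
print(not_counterfactual)    # 0
```

On this framing, a donation is counterfactually valid to the extent that the world with it differs from the world without it.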
tl;dr: I'm looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I've collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates at email@example.com!
I'm a social scientist, and I want to contribute to reducing long-term risks from AI. I'm excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science.
To that end, I'm hoping to:
tl;dr: I am much more interested in making the future good than long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any lock-in scenarios, leading to a world at least as bad as today's, that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.
I thought it might be interesting to other EAs to hear why I do not feel very on board with longtermism, since longtermism is important to a lot of people in the community.
This post is about the worldview called longtermism. It does not describe a position on...
My contribution to the creative writing contest may be a bit heavy-handed for EA tastes, but I’d love to get feedback and edit the story as needed. Thanks!
You never thought you’d use the reset button until the day you did.
The button, an old family heirloom gifted by your parents on your eighteenth birthday, sat at the bottom of a box in your closet for most of your twenties. While you were laser-focused on maxing out your college grades and interning at company after company until you finally landed a good job, then sating a bit of your lifelong wanderlust with well-deserved world travels, the reset button lingered, half forgotten.
Yes, you made youthful mistakes. From time to time, you considered digging the button out and using it. But...
tl;dr - Average utilitarianism seems to have weird implications if we're averaging over time, instead of just over people. Is this discussed anywhere?
If we consider whether we'd prefer a society of 1 million blissfully happy people versus 2 million merely very happy people, we're in the realm of typical population ethics. However, if we instead ask whether we'd like to have 10 generations of blissfully happy people, or 100 generations of merely very happy people, it seems like a different question - not because of discounting, but because of how we combine the two dimensions: we might want to aggregate over time even if we average over the people alive at any given time, or average over time even if we sum over the people alive at any given time, since these seem...
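The two aggregation rules can be made concrete with a small sketch. The utility numbers below are hypothetical, chosen only to illustrate how the rankings diverge:

```python
# Each society is a list of generations, each summarized as a
# (population, average utility per person) pair.
society_a = [(1_000_000, 100.0)] * 10    # 10 generations, blissfully happy
society_b = [(2_000_000, 80.0)] * 100    # 100 generations, merely very happy

def avg_then_sum(society):
    """Average over people within each generation, then sum over time."""
    return sum(avg_utility for _pop, avg_utility in society)

def avg_then_avg(society):
    """Average over people within each generation, then average over time."""
    per_gen = [avg_utility for _pop, avg_utility in society]
    return sum(per_gen) / len(per_gen)

# Summing over time favors the longer-lasting society B:
print(avg_then_sum(society_a), avg_then_sum(society_b))   # 1000.0 8000.0
# Averaging over time favors the happier-per-generation society A:
print(avg_then_avg(society_a), avg_then_avg(society_b))   # 100.0 80.0
```

So the same "average utilitarian" label covers two rules that rank these societies oppositely, which is exactly why averaging over time versus over people seems like a different question.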