Recent Discussion

Petrov Day

Today we celebrate not destroying the world. We do so today because 38 years ago, Stanislav Petrov made a decision that averted tremendous calamity. It's possible that an all-out nuclear exchange between the US and USSR would not have actually destroyed the world, but there are few things with an equal chance of doing so.

As a Lieutenant Colonel of the Soviet Army, Petrov manned the system built to detect whether the US government had fired nuclear weapons at Russia. On September 26th, 1983, the system reported five incoming missiles. Petrov's job was to report this as an attack to his superiors, who would launch a retaliatory nuclear response. But instead, contrary to the evidence the systems were giving him, he called it in as a false alarm.

...

If it's not permissible for me to shut down the site, why is it permissible for Aaron to send unsolicited emails to 100 people inviting them to shut it down?

1 · EricHerboso · 19m:

On the one hand, in order for MAD to work, decision-makers on both sides must be able to issue credible threats of a retaliatory strike. This is also true in this experiment's case: if we assume the game will be iterated on future Petrov Days, then we must show that any tit-for-tat precommitments made are followed through. But at the same time, if LessWrong takes down the EA Forum, taking LessWrong down in return just seems like wanton destruction.

I know that, as a holder of the codes, I should ensure that I'm making a fully credible threat by precommitting to a retaliatory strike. But I want to take precommitments seriously, and I don't feel confident enough to precommit to such an action.

After giving this much thought, I decided to present the perhaps-too-weak claim that if the EA Forum goes down due to a LessWrong user pressing the button, I may press in retaliation. While this is not an idle threat, and I am serious about potentially performing a retaliatory strike, I am falling short of committing myself to that action in advance. I give more of my reasoning in my blog post: http://www.ericherboso.org/2021/09/honoring-petrov-day-by-not-pressing.html. (Ultimately, this doesn't matter, since others are already willing to make such a precommitment, so I don't have to.)
2 · Peter Wildeford · 25m: Too bad - I am committing to retaliating to establish a deterrent.
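For readers less familiar with the commitment logic this thread is trading on, here is a minimal sketch of why an uncommitted retaliatory threat fails to deter while a binding precommitment succeeds. The game structure and all payoff numbers are illustrative assumptions, not anything stated in the thread:

```python
# Toy sequential deterrence game: the attacker moves first, the defender
# then responds. Payoff numbers are illustrative assumptions only.

PAYOFFS = {  # (attacker_payoff, defender_payoff)
    ("hold", None): (0, 0),             # no attack happens
    ("strike", "ignore"): (1, -1),      # attack lands, no reprisal
    ("strike", "retaliate"): (-2, -2),  # mutual destruction
}

def attacker_choice(defender_precommitted):
    """The attacker anticipates the defender's response and best-responds."""
    if defender_precommitted:
        response = "retaliate"  # the commitment binds, by assumption
    else:
        # A free defender picks the ex-post best response to a strike;
        # here ignoring (-1) beats retaliating (-2), so the threat is hollow.
        response = max(("ignore", "retaliate"),
                       key=lambda r: PAYOFFS[("strike", r)][1])
    strike_payoff = PAYOFFS[("strike", response)][0]
    hold_payoff = PAYOFFS[("hold", None)][0]
    return "strike" if strike_payoff > hold_payoff else "hold"

print(attacker_choice(defender_precommitted=False))  # -> strike
print(attacker_choice(defender_precommitted=True))   # -> hold
```

With these assumed payoffs, only the defender who has locked in retaliation in advance deters the strike, which is exactly the tension between credibility and wanton destruction the comment describes.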



Originally written 2017-11-24; crossposted here after discussion on GiveWell Donation Matching

Sometimes people will describe a donation as "counterfactually valid" or just "counterfactual". For example, you might offer to donate a counterfactual dollar for every push-up your team does. [1] The high-level interpretation is that you're doing something you wouldn't have done otherwise.

What does "wouldn't have done otherwise" mean?

  • If you hire a mason to repoint your wall, that's not something they would have done on their own.
  • If you donate to a charity matching drive, the matching funds were very likely going to the charity regardless.

 

The former is fully counterfactually valid (you caused impact) while the latter isn't counterfactually valid at all (the impact of the matching funds was unchanged by your donation).
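One way to make this test concrete: counterfactual impact is the difference between what happens with your action and what would have happened without it. Here is a minimal sketch, with the numbers standing in for illustrative "units of impact" (an assumption for the example, not anything from the post):

```python
def counterfactual_impact(outcome_with_you, outcome_without_you):
    """Impact you caused = what happened minus what would have happened anyway."""
    return outcome_with_you - outcome_without_you

# The mason: the wall only gets repointed because you hired them.
print(counterfactual_impact(outcome_with_you=1.0, outcome_without_you=0.0))  # 1.0

# The matching drive: the matching funds were going to the charity regardless,
# so your donation changes nothing about their impact.
print(counterfactual_impact(outcome_with_you=1.0, outcome_without_you=1.0))  # 0.0
```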

Say I offer to make a...

8 · Davidmanheim · 8h: I'm probably a bit unusual in this regard, but I have different budgets for different things, so a counterfactual donation means spending $50 from my personal luxuries budget on a donation to that charity, which is in addition to the 10% of my net income that I donate otherwise. That keeps everything simple.

If you spend your personal luxuries budget in full every year, this sounds like #9, and I agree it's fine to call it counterfactual.

tldr: I'm looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I've collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates at vlgates@stanford.edu!


 

Broader Vision

I'm a social scientist, and I want to contribute to reducing long-term risks from AI. I'm excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science. 

To that end, I'm hoping to:

  1. collect and grow a list of research questions that would be interesting for social scientists of various subfields and valuable to AI safety
  2. work with undergraduate students / collaborators on these
...

The comment about counterfactuals makes me think of computational cognitive scientist Tobias Gerstenberg's research (https://cicl.stanford.edu), which focuses heavily on counterfactual reasoning in the physical domain, though he also has work in the social domain.

I confess to only a surface-level understanding of MIRI's research agenda, so I'm not quite able to connect my understanding of counterfactual reasoning in the social domain to a concrete research question within MIRI's agenda. I'd be happy to hear more though if you had more detail! 

1 · Vael Gates · 7h: Thanks so much; I'd be excited to talk! Emailed.

tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios which lock us into a world at least as bad as now and which we can avoid or shape in the near future. If there are none, I think it is better to focus on "traditional neartermist" ways to improve the world.

I thought it might be interesting to other EAs why I do not feel very on board with longtermism, as longtermism is important to a lot of people in the community.

This post is about the worldview called longtermism. It does not describe a position on...

Thanks! I'm not very familiar with Haidt's work, so this could very easily be misinformed, but I imagine that other moral foundations / forms of value could also give us some reasons to be quite concerned about the long term, e.g.:

  • We might be concerned with degrading--or betraying--our species / traditions / potential.
  • You mention meaninglessness--a long, empty future strikes me as a very meaningless one.

(This stuff might not be enough to justify strong longtermism, but maybe it's enough to justify weak longtermism--seeing the long term as a major concern.)...

3 · Mauricio · 8h: Thanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)
2 · MichaelStJules · 8h: I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.

My contribution to the creative writing contest may be a bit heavy-handed for EA tastes, but I’d love to get feedback and edit the story as needed. Thanks!

 

You never thought you’d use the reset button until the day you did.

The button, an old family heirloom gifted by your parents on your eighteenth birthday, sat at the bottom of a box in your closet for most of your twenties. While you were laser-focused on maxing out your college grades and interning at company after company until you finally landed a good job, then sating a bit of your lifelong wanderlust with well-deserved world travels, the reset button lingered, half forgotten.

Yes, you made youthful mistakes. From time to time, you considered digging the button out and using it. But...

I actually read the protagonist as 'probably suffering from radiation poisoning, might be about to literally die from the next bomb or the building collapsing' as of the moment before they hit the reset, so I would see such planning as irrational rather than sensible - a little information might help, but not if it risks your life (which is what you're thinking about if you're selfish) or the fate of the world (which is what you're thinking about if you're selfless).

1 · Joshua Ingle · 13h: Interesting thoughts, thanks for your input! I'll think about how to incorporate the feedback.

tl;dr - Average utilitarianism seems to have weird implications if we're averaging over time, instead of just over people. Is this discussed anywhere?

If we consider whether we'd prefer a society of 1 million blissfully happy people versus 2 million merely very happy people, we're in the realm of typical population ethics. However, if we instead ask whether we'd like to have 10 generations of blissfully happy people, or 100 generations of merely very happy people, it seems like a different question - not because of discounting, but because we might want to aggregate over time even if we average over people alive at any given time, or might want to average over time even if we sum over people alive at any given time, since these seem...
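To make the contrast concrete, here is a minimal sketch of the aggregation schemes being compared. The welfare levels and population sizes are illustrative assumptions chosen only to match the shape of the question:

```python
# 10 generations of 1,000 blissfully happy people (welfare 10 each) vs.
# 100 generations of 1,000 merely very happy people (welfare 8 each).
BLISSFUL = [[10.0] * 1000 for _ in range(10)]
VERY_HAPPY = [[8.0] * 1000 for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

def aggregate(world, within, across):
    """Aggregate welfare within each generation, then across generations."""
    per_gen = [mean(g) if within == "average" else sum(g) for g in world]
    return mean(per_gen) if across == "average" else sum(per_gen)

# Average over people alive at a time, then SUM over generations:
print(aggregate(BLISSFUL, "average", "sum"))    # 100.0
print(aggregate(VERY_HAPPY, "average", "sum"))  # 800.0 -> longer world wins

# Average over people AND over generations:
print(aggregate(BLISSFUL, "average", "average"))    # 10.0 -> blissful world wins
print(aggregate(VERY_HAPPY, "average", "average"))  # 8.0
```

Which world comes out ahead flips purely on the choice of temporal aggregation, even with the population-ethics question at each moment held fixed, which is the tension the post is pointing at.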