An adapted excerpt from What We Owe The Future by Will MacAskill is now live in The New York Times and will run as the cover story for their Sunday Opinion this weekend.

I think the piece makes for a great concise introduction to longtermism, so please consider sharing the piece on social media to boost its reach! 


From the comments in the NYT, two notes on communicating longtermism to people-like-NYT-readers:

  1. Many readers are confused by the focus on humans.
  2. Some readers are confused by the suggestion that longtermism is weird (Will: "It took me a long time to come around to longtermism") rather than obvious.

Re 2, I do think it's confusing to present longtermism as non-obvious unless you're emphasizing its weird implications: calculations dominated by the distant future, x-risk, and things at least as strange as digital minds filling the universe.

Good points, though it's worth noting that the people who comment on NYT articles are probably not representative of the typical NYT reader

I'm also a bit surprised at how many of the comments are concerned about overpopulation. The most-recommended comment is essentially the tragedy of the commons. That comment's tone, like that of many similar ones (and of a number of anti-GOP ones), felt deeply fatalistic, which worries me. So many of the comments felt like variations on "we're screwed," which goes against the belief in a net-positive future upon which longtermism is predicated.

On that note, I'll shout out Jacy's post from about a month ago, which echoes those fears in a more EA-flavored way.

which goes against the belief in a net-positive future upon which longtermism is predicated

Longtermism per se isn't predicated on that belief at all—if the future is net-negative, it's still (overwhelmingly) important to make future lives less bad.

I can't unread this comment:

"Humanity could, theoretically, last for millions of centuries on Earth alone." I find this claim utterly absurd. I'd be surprised if humanity outlasts this century.

Ughh they're so close to getting it! Maybe this should give me hope?

Basically, William MacAskill's longtermism (EA longtermism) is trying to solve the problem of distributional shift. Most cultures with long-term thinking assume there is no distributional shift, i.e. that no key assumption of the present will turn out to be wrong. If that assumption were correct, we shouldn't interfere with cultures, as they would converge to local optima. But it isn't correct, and so longtermism has to deal with weird scenarios like AI and x-risk.

Thus the form EA longtermism takes is not obvious, because it can't assume there will be no shift into out-of-distribution behavior. In fact, we have good reasons to think there will be massive distributional shifts. That's the key difference between EA longtermism and other cultures' long-term thinking.

Here's a non-paywalled link available for the next 14 days.

Nice article, thanks for linking (and Will for writing).

Unfortunately, some people I know thought this section was a little misleading, as they felt it insinuated that x-risk from nuclear war was over 20% - a figure I think few EAs would endorse. Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?

We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover.

Hmm, I don't read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (but that estimate is probably picked from the higher end of serious estimates), WWIII may or may not lead to all-out nuclear war, all-out nuclear war has some unknown chance of leading to the collapse of civilization, and if that happened then there would also be some further unknown chance of never recovering. So all-in-all, I'd read this as Will thinking that X-risk from nuclear war in the next 50 years was well below 20%.
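The chain of conditionals in that reading can be made concrete with a quick back-of-the-envelope calculation. Only the 20% figure appears in the excerpt; the conditional probabilities below are purely illustrative assumptions, chosen just to show how the product ends up well below 20%:

```python
# Back-of-the-envelope: multiply the conditional probabilities in the passage.
# Only p_ww3 comes from the article; the rest are illustrative placeholders.
p_ww3 = 0.20                        # "some experts" figure for WWIII by 2070
p_allout_given_ww3 = 0.5            # hypothetical: WWIII goes all-out nuclear
p_collapse_given_allout = 0.5       # hypothetical: civilization collapses
p_no_recovery_given_collapse = 0.3  # hypothetical: we never recover

p_xrisk = (p_ww3 * p_allout_given_ww3
           * p_collapse_given_allout * p_no_recovery_given_collapse)
print(f"Implied nuclear x-risk by 2070: {p_xrisk:.1%}")
```

Even with these fairly pessimistic placeholder conditionals, the implied existential risk comes out around 1.5%, an order of magnitude below the headline 20% figure.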

I also don't think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.

Great read!  Am I the only one who heard Will's Scottish brogue in my ear as I was reading?

Does anyone know whether the essay is also published somewhere else (preferably without a pay-wall)? I have a NYT subscription but apparently some friends of mine can’t access it.

A thing that sometimes works to get around paywalls is to add 'archive.is/' after the 'https://', like so:

https://archive.is/www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html

I've found '12ft.io' works similarly, FWIW. Per its FAQ, it shows the cached version of the page that Google uses to index content for search results.
