An adapted excerpt from What We Owe The Future by Will MacAskill is now live in The New York Times and will run as the cover story for their Sunday Opinion this weekend.

I think the piece makes for a great concise introduction to longtermism, so please consider sharing the piece on social media to boost its reach! 

Comments

From the comments on the NYT piece, two notes on communicating longtermism to people like typical NYT readers:

  1. Many readers are confused by the focus on humans.
  2. Some readers are confused by the suggestion that longtermism is weird (Will: "It took me a long time to come around to longtermism") rather than obvious.

Re 2, I do think it's confusing to present longtermism as non-obvious unless you're also emphasizing its weird implications: our calculations being dominated by the distant future, x-risk, and things at least as weird as digital minds filling the universe.

Good points, though it's worth noting that the people who comment on NYT articles are probably not representative of the typical NYT reader.

I'm also a bit surprised at how many of the comments are concerned about overpopulation. The most-recommended comment essentially invokes the tragedy of the commons. That comment's tone - and the tone of many like it, as well as a bunch of anti-GOP ones - felt really fatalistic, which worries me. So many of the comments felt like variations on "we're screwed", which goes against the belief in a net-positive future upon which longtermism is predicated.

On that note, I'll shout out Jacy's post from about a month ago, which echoes those fears in a more EA way.

"which goes against the belief in a net-positive future upon which longtermism is predicated"

Longtermism per se isn't predicated on that belief at all—if the future is net-negative, it's still (overwhelmingly) important to make future lives less bad.

I can't unread this comment:

"Humanity could, theoretically, last for millions of centuries on Earth alone." I find this claim utterly absurd. I'd be surprised if humanity outlasts this century.

Ughh they're so close to getting it! Maybe this should give me hope?

Basically, William MacAskill's longtermism, or EA longtermism, is trying to solve the distributional-shift issue. Most cultures that practice long-term thinking assume there is no distributional shift, i.e. that no key assumptions of the present turn out to be wrong. If that assumption were correct, we shouldn't interfere with cultures, as they would converge to local optima. But it isn't, and thus longtermism has to deal with weird scenarios like AI or x-risk.

Thus the form of EA longtermism is not obvious, as it can't assume there will be no distributional shift into out-of-distribution behavior. In fact, we have good reasons to think there will be massive distributional shifts. That's the key difference between EA longtermism and other cultures' longtermism.

Here's a non-paywalled link available for the next 14 days.

Nice article, thanks for linking (and Will for writing).

Unfortunately, some people I know thought the section below was a little misleading, as they felt it insinuated that x-risk from nuclear war was over 20 percent - a figure I think few EAs would endorse. Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?

"We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover."

Hmm, I don't read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (though that estimate is probably picked from the higher end of serious estimates); WWIII may or may not lead to all-out nuclear war; all-out nuclear war has some unknown chance of causing the collapse of civilization; and if that happened, there would be some further unknown chance of never recovering. So all in all, I'd read this as Will thinking that x-risk from nuclear war in the next 50 years is well below 20%.
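To make that reading concrete, here's a rough back-of-the-envelope sketch. Every conditional probability below is a made-up placeholder (not a figure from Will or the article); the point is only to show how the chain of uncertainties multiplies the headline 20 percent figure down:

```python
# Illustrative placeholders only, not estimates from the article or from Will.
p_ww3_by_2070 = 0.20                 # "some experts": chance of a third world war by 2070
p_all_out_given_ww3 = 0.5            # assumed: WWIII escalates to all-out nuclear war
p_collapse_given_all_out = 0.3       # assumed: all-out nuclear war collapses civilization
p_no_recovery_given_collapse = 0.2   # assumed: civilization never recovers

p_unrecovered_collapse = (
    p_ww3_by_2070
    * p_all_out_given_ww3
    * p_collapse_given_all_out
    * p_no_recovery_given_collapse
)
print(f"Implied chance of unrecovered collapse by 2070: {p_unrecovered_collapse:.1%}")
# -> 0.6%, i.e. well below the headline 20% even with fairly pessimistic placeholders
```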

I also don't think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.

Great read!  Am I the only one who heard Will's Scottish brogue in my ear as I was reading?

Does anyone know whether the essay is also published somewhere else (preferably without a paywall)? I have a NYT subscription but apparently some friends of mine can't access it.

A thing that sometimes works to get around paywalls is to add 'archive.is/' right after the 'https://', like so:

https://archive.is/www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html
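If you're doing this for several links, here's a minimal sketch of that rewrite as a helper function (the function name is just illustrative; it only prepends 'archive.is/' to the original link, exactly as described above):

```python
def archive_link(url: str) -> str:
    """Rewrite a link so it is served via archive.is, e.g.
    https://www.nytimes.com/...  ->  https://archive.is/www.nytimes.com/..."""
    scheme, _, rest = url.partition("://")   # split off 'https'
    return f"{scheme}://archive.is/{rest}"

print(archive_link(
    "https://www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html"
))
# https://archive.is/www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html
```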

I've found '12ft.io' works similarly, fwiw. Per its FAQ, it shows the cached version of the page that Google uses to index content for search results.
