Recent Discussion

Epistemic Status: Personal view about longtermism and its critics.

Recently, there have been a series of attacks on longtermism. These largely focus on the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents. This should be worrying; a largely white, educated, western, and male group talking about how to fix everything should raise flags. And neglecting to address the roots of futurism is worrying - though I suspect that highlighting them and attempting apologetics would have been an even larger red flag to many critics.

At the same time, attacks on new ideas like longtermism are inevitable. New ideas, whether good or bad, are usually controversial. Moreover, any approaches or solutions that are proposed will have drawbacks, and when they are compared to the...

Davidmanheim (4h): I would note that Toby and others in the long-termist camp do, in fact, very clearly embrace "the foundational values embedded in Peter Singer's writings." I agree that some people who embrace long-termism could decide to do so on bases other than impartial utilitarianism or similar arguments, which support both redistribution and some importance of the long term; but I don't hear them involved in the discussions, and so I don't think it works as a criticism when the actual people do also advocate for near-term redistributive causes.
Davidmanheim (4h): No, because no-one is really providing this specific bit of outside feedback to most of those groups. As the post says, there have been recent attacks *on longtermism*.

There are also attacks on all global development charities for being colonialist.

Also, you are giving more credit to the critiques than they deserve. One of them is written by someone who clearly doesn't believe what he is saying, but is instead redressing perceived personal slights by some of the people and organisations concerned - in particular with respect to turning him down for jobs, being unwilling to publicise his book, criticising his work in public, etc. For someone who thinks that these longtermist orgs are genocidal, he has applied for jobs at an awful lot of them!

Rethink Priorities is working on a project called ‘Defense in Depth Against Catastrophic AI Failures’. “Defense in depth” refers to the use of multiple redundant layers of safety and/or security measures such that each layer reduces the chance of catastrophe. Our project is intended to (1) make the case for taking a defense in depth approach to ensuring safety when deploying near-term, high-stakes AI systems and (2) identify many defense layers/measures that may be useful for this purpose.

If you can think of any possible layers, please mention them below. We’re hoping to collect a very long list of such layers, either for inclusion in our main output or for potentially investigating further in future, so please err on the side of commenting even if the ideas are...

Four layers come to mind for me:

  • Have strong theoretical reasons to think your method of creating the system cannot result in something motivated to take dangerous actions
  • Inspect the system thoroughly after creation, before deployment, to make sure it looks as expected and appears incapable of making dangerous decisions
  • Deploy the system in an environment where it is physically incapable of doing anything dangerous
  • Monitor the internals of the system closely during deployment to ensure operation is as expected, and that no dangerous actions are attempted
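The independence of these layers is the core of the defense-in-depth idea: each layer should catch failures the others miss, so the chance of catastrophe falls multiplicatively. A minimal sketch of that structure, using entirely hypothetical layer names and checks (none of these are from Rethink Priorities' project):

```python
# Hypothetical sketch of defense in depth: deployment proceeds only if
# every independent layer approves. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Layer:
    name: str
    # Returns True if this layer finds no problem with the system.
    check: Callable[[Dict[str, bool]], bool]


def defense_in_depth(system: Dict[str, bool], layers: List[Layer]) -> bool:
    """Approve deployment only if every layer independently passes."""
    for layer in layers:
        if not layer.check(system):
            print(f"Blocked by layer: {layer.name}")
            return False
    return True


# Toy layers mirroring the four listed above.
layers = [
    Layer("theoretical safety argument", lambda s: s.get("has_safety_proof", False)),
    Layer("pre-deployment inspection", lambda s: s.get("passed_inspection", False)),
    Layer("sandboxed environment", lambda s: s.get("sandboxed", False)),
    Layer("runtime monitoring", lambda s: s.get("monitored", False)),
]

system = {
    "has_safety_proof": True,
    "passed_inspection": True,
    "sandboxed": True,
    "monitored": True,
}
print(defense_in_depth(system, layers))
```

If the per-layer failure probabilities were independent (a strong assumption in practice, since failures can be correlated), four layers that each fail 10% of the time would give a combined failure chance of 0.01%.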

Random thought: You could use prediction setups to resolve specific cruxes on why prediction setups outputted certain values.

P.S. I'd be keen on working on this, how do I get involved?

Samuel Shadrach (43m): +1. Coordination is a pain though; you may be better off appealing to specific HNWI investors to rally the cause. If anyone else is interested they can buy stock and delegate votes. In general I think there's a case to be made for making the delegation of voting rights easier.
Samuel Shadrach (1h): As someone coming from the crypto space, I think carefully about which identity has what kind of content attached, and whether they can be cross-linked - both for privacy and engagement purposes. Usernames instead of real names work well for this. I don't see why researchers or EAs can't do that.

In his speech at EAG London 2021, Benjamin Todd, the Founder and CEO of 80,000 Hours, said that EA organisations tend to have talent bottlenecks in some roles, but that doesn't mean it's necessarily easier to find a job. Unless you are very lucky, you're likely to go through multiple application rounds and spend at least a couple of months before you get an offer you're happy with. However, for some reason, the moment we get a job we forget just how hard it was to land one. When we decide to look for another role a couple of years later, it takes us by surprise. No wonder - job search is a skill that everyone should improve and keep up-to-date, as practices change every year.


AAC does offer this (or at least they offered something similar after I went through their Introductory Course) :) in the form of matchmaking with one of your peers from the cohort. I assume 80,000 Hours offers similar support with their 1-1 advice. Also, people with WANBAM mentors or other mentors get similar support.

Although my thinking is more around "widely available application support for EAs" (regardless of participation in a program, etc.), I would imagine a pilot being run with a simple survey and then asking people to participate (both in "mentor"/"mentee" ... (read more)

What do EAs think about AI surveillance tech dismantling democracies?

We're expanding the SoGive analytical team and are looking to hire one or two new analysts to evaluate charities. Please see this link for more details, and feel free to contact me if you want to discuss further.

"Our approach has similarities with that followed by charity analysis organisations like GiveWell and Founders Pledge."

To put it bluntly, why should someone go to (work for, consult the recommendations of, support) SoGive vs other leading organizations you mention? Does your org fill a neglected niche, or take a better approach somehow, or do you think it's just valuable having multiple independent perspectives on the same issue?



I have a surprising number of friends, or friends of friends, who believe the world as we know it will likely end in the next 20 or 30 years.

They believe that transformative artificial intelligence will eventually either: a) solve most human problems, allowing humans to live forever, or b) kill/enslave everyone.

A lot of people honestly aren't sure of the timelines, but they're sure that this is the future. People who believe there's a good chance of transformative AI in the next 20-30 years are said to have "short timelines."

There are a lot of parallels between people with short AI timelines and the early Christian church. Early Christians believed that Jesus was going to come back within their lifetimes. A lot of early Christians were quitting their jobs...

Here’s a version of the database that you can filter and sort however you wish, and here’s a version you can add comments to.

Key points

  • I’m addicted to creating collections and have struck once more.

  • The titular database includes >130 organizations that are relevant to people working on longtermism- or existential-risk-related issues, along with info on:

    • The extent to which they’re focused on longtermism/x-risks
    • How involved in the EA community they are
    • Whether they’re still active
    • Whether they aim to make/influence funding, policy, and/or career decisions
    • Whether they produce research
    • What causes/topics they focus on
    • What countries they’re based in
    • How much money they influence per year and how many employees they have[1]
  • I aimed for (but likely missed) comprehensive coverage of orgs that are substantially focused on longtermist/x-risk-related issues and are part of the EA community.

  • I also included

Audio version available at Cold Takes (or search Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio")

This post interrupts the Has Life Gotten Better? series to talk a bit about why it matters what long-run trends in quality of life look like.

I think different people have radically different pictures of what it means to "work toward a better world." I think this explains a number of the biggest chasms between people who think of themselves as well-meaning but don't see the other side that way, and I think different pictures of "where the world is heading by default" are key to the disagreements.

Imagine that the world is a ship. Here are five very different ways one might try to do one's part in "working toward a better...

I've claimed before that the critical enabler for eucatastrophe is having a clear and implementable vision of where you are heading - and that's exactly what is missing in most mutinies.

To offer an analogy in the form of misquoting Russian literature: "all [functioning governments] are the same, but each [dysfunctional new attempt at government] is [dysfunctional] in its own way."