A specter is haunting Silicon Valley — the specter of TESCREALism.

“TESCREALism” is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:

Transhumanism — the belief that we should develop and use “human enhancement” technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann’s.   

Extropianism — the belief that we should settle outer space and create or become innumerable kinds of “posthuman” minds very different from present humanity.  

Singularitarianism — the belief that humans are going to create a superhuman intelligence in the medium-term future. 

Cosmism — a near-synonym to extropianism. 

Rationalism — a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people’s ability to make good decisions and come to true beliefs. 

Effective altruism — a community focused on using reason and evidence to improve the world as much as possible.  

Longtermism — the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]

TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times.

The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley — principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen — two influential thinkers Torres and Gebru have identified as TESCREAList — don’t agree on much. Eliezer Yudkowsky believes that with our current understanding of AI we’re unable to program an artificial general intelligence that won’t wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Marc Andreessen believes that artificial intelligence will be the most beneficial invention in human history: People who push for delay have the blood of the starving people and sick children whom AI could have helped on their hands. But their very disagreement depends on a number of common assumptions: that human minds aren’t special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, that intelligences descended from humanity can and should spread across the stars.[3]

As an analogy, Republicans and Democrats don’t seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you’d call this “liberal democracy.” Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. When you mostly talk to people who share your perspective, it’s easy to not notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It’s easy to stumble across Andreessen’s or Yudkowsky’s writing without knowing anything about transhumanism. The TESCREALism concept can clarify what’s going on for confused outsiders.

However, Torres is rarely careful enough to make the distinction between people’s beliefs and the premises behind the conversations they’re having. They act like everyone who believes one of these ideas believes in all the rest. In reality, it’s not uncommon for, say, an effective altruist to be convinced of the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism. All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.” To them, TESCREALism is “a new, secular religion, in which ‘heaven’ is something we create ourselves, in this world,” invented by “a bunch of 20th-century atheists [who] concluded that their lives lacked the meaning, purpose and hope provided by traditional religion.”

Atheists, who don’t expect justice to come from an omnibenevolent God or a blissful afterlife, have sought meaning, purpose, and hope in improving this world since at least the writing of the 1933 Humanist Manifesto.[4] It is perfectly natural and not especially sinister. If a community working together to create a better world meets the criteria for a religion, I’m all for religion.

Torres’ primary argument that TESCREALism is dangerous centers on the fondness that effective altruists, rationalists, and longtermists hold for wild thought experiments — and what they might imply about what we should do. Torres critiques philosopher Nick Bostrom for arguing that very tiny reductions in the risk of human extinction outweigh the certain death of many people who currently exist, Eliezer Yudkowsky for arguing that we should prefer to torture one person rather than allow more people than there are atoms in the universe to get dust specks in their eyes, and effective altruists (as a group) for arguing that it might be morally right to work for an “evil” organization and donate the money to charity. 
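To see why arguments like Bostrom’s carry such force on paper, it helps to write out the expected-value arithmetic behind them. Here is a minimal sketch, using illustrative numbers of my own rather than Bostrom’s actual figures:

$$\underbrace{10^{16}}_{\substack{\text{assumed potential} \\ \text{future lives}}} \times \underbrace{10^{-9}}_{\substack{\text{assumed reduction} \\ \text{in extinction risk}}} = 10^{7} \ \text{expected future lives saved.}$$

On a naive expected-value reading, that figure dwarfs the certain deaths of even millions of people alive today, which is precisely why the thought experiment is so unsettling and why the real argument is over whether such multiplications should be taken literally.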

It seems like the thing Torres might actually be objecting to is analytic ethical philosophy.

Effective altruists, rationalists, and longtermists have no monopoly on morally repugnant thought experiments. Analytic ethical philosophy is full of them. Should you tell the truth to the Nazi at your door about whether there are Jews in your basement? If you’re in a burning building, should you save one child or ten embryos? If an adult brother and sister secretly have sex, knowing that they’re both unable to conceive children, and they both had a wonderful time and believe the sex brought them closer and made their relationship better, did they do something wrong, and if so, why? Ethical philosophers argue both sides of these and many other morally repugnant questions. They’re trying to poke at the edge cases within our intuitions, the places where our intuitive sense of good and bad doesn’t match up with our stated ethical principles.  

Outside the philosophy classroom, ethicists mostly ignore the findings of their own field, as philosophers Joshua Rust and Eric Schwitzgebel have shown in a clever series of studies. Ethicists ignore ethical philosophy in ways we like (presumably even the most committed Kantian would lie if there were actually a Nazi at the door), but also in ways we don’t like (not donating to charity). Rationalists and effective altruists are unusual because they act on some of the conclusions of ethical philosophy outside of the classroom — and there, of course, comes the danger.

In practice, Torres has found little evidence that effective altruists, rationalists, and longtermists have carried these particular thought experiments through to their conclusions. No one has access to more people than there are atoms in the universe, much less the ability to put dust specks in their eyes. 80,000 Hours, a nonprofit that provides career advice and conducts research on which careers have the most effective impact,[5] has consistently advised against taking harmful jobs.

Torres gives an example of an “evil organization” at which effective altruists recommend people work: the proprietary trading firm Jane Street. But Jane Street seems at worst useless. There are many criticisms to be made of a system in which people earn obscene amounts of money making sure that the price of a stock in Tokyo equalizes with the price of a stock in London slightly faster than it otherwise would. But if someone is going to pay millions of dollars for people to do that, it might as well go to people who will spend it on medicine for poor children rather than to people who will spend it on a yacht. It’s dumb to dump money from helicopters, but if someone dumps a million dollars in front of my house, I’m going to take it and donate it. It’s true that Sam Bankman-Fried, an effective altruist Jane Street employee, went on to commit an enormous fraud — but the fraud was universally condemned by members of the effective altruist community. People who do evil things exist in every sufficiently large social movement; it doesn’t mean that every movement recommends evil. 

The most important thought experiment — in terms of the weight Torres gives it and how TESCREALists actually behave — is about trade-offs related to so-called existential risk: the risk of either human extinction or a greatly curtailed future (such as a 1984-style dystopia). While most TESCREALists are worried about a range of existential risks, including bioengineered pandemics, the one most discussed by Torres is advanced artificial intelligence. Many experts in the field worry that we’ll develop extraordinarily powerful artificial intelligences without knowing how to get them to do what we want. If a normal computer program is seriously malfunctioning, we can turn it off until we figure out how to debug it. But a so-called “misaligned” artificial intelligence won’t want us to turn it off — and may well drive us extinct so we can’t. 

People who are worried about risks from advanced artificial intelligence generally expect that it will come very soon; models created by such people generally predict that we’ll develop it long before 2100. No significant number of people are saying, “Well, I think that in 999,999,999 out of 1,000,000,000 worlds we won’t invent an artificial intelligence in the next two hundred years, but I’ve completely reshaped my entire life around it anyway, because there are so many potential digital minds I could affect.”

It’s true that TESCREAList philosophers often debate Pascal’s mugging arguments: arguments that you should (say) be willing to kill four people for an infinitesimal decrease in existential risk. But Pascal’s mugging arguments are generally considered undesirable paradoxes, and TESCREAList philosophers often work on trying to figure out a convincing, solid counterargument.[6] Yet it’s convenient for Torres’ case to pretend otherwise.

Many rationalists, effective altruists, and longtermists talk about a concept called “getting off the crazy train.” Rationalists, effective altruists, and longtermists don’t want to be the hypocritical ethics professor who talks about the moral necessity of donating most of your income to help the global poor and then drives home in a Cadillac. They also don’t want to commit genocide because of a one-in-one-billion chance that it would prevent extinction. It makes sense to get off the crazy train at some point. Human reason is fallible; it’s far more likely that you would mistakenly believe that this genocide is justified than that it actually is. 

But it’s difficult to pick any sort of principled stop at which to deboard the crazy train. Some people are bought in on AI risk but don’t accept that a universe with more worse-off people can be better than a universe with fewer better-off people. Some people work on preventing bioengineered pandemics and donate a fifth of their salaries to buy malaria nets. Some people work on vaccines while worrying that everything will be pointless when the world ends. Some people say, “I might believe we live in a simulation, but I don’t accept infinite ethics; that stuff’s too wild,” even though the exact distinction being made here is unclear to anyone else. And everyone shifts uncomfortably and wants to change the subject when the topic of how they made these decisions comes up. 

But there’s one particular stop on the crazy train Torres worries the most about. They critique longtermism sharply:

According to the longtermist framework, the biggest tragedy of an AGI apocalypse wouldn’t be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed. We should thus do everything we can to ensure that these future people exist, including at the cost of neglecting or harming current-day people — or so this line of reasoning straightforwardly implies.

They ask, “If the ends can justify the means, and the end is paradise, then what exactly is off the table for protecting and preserving this end?” In short, TESCREALists are so in love with the idea of a far-off paradise that they are willing to sacrifice the needs of people currently living. 

At first blush, it seems insensitive, even cruel, to prioritize people who don’t exist over people who do. But it’s difficult to have common-sense views about a number of issues without caring about future people. For example, the negative effects of climate change are mostly on people who don’t exist yet — and that was even more true in the late 1980s when the modern consensus around climate change was first coalescing. Should we tolerate higher gas prices now to keep an island from sinking underwater a century from now? After all, high gas prices harm the people choosing between dinner and the gas they need to get to work right now. Why not just pollute as much as we want and stick future generations with the bill?

Longtermism may rearrange our priorities, but it won’t fundamentally replace them. Large effective altruist funders such as Open Philanthropy generally adopt a “portfolio” approach to doing good, including both charities that primarily affect present people and charities that primarily affect future people. Effective altruists are trying to pick the lowest-hanging fruit to make the world a better place. If you’re in an orchard, you’ll do much better picking the easily picked apples from as many trees as you can, rather than hunting for the tree with the most apples and stripping them all off while saying, “This tree has the most apples, and therefore no matter how hard it is to climb, all its apples must be the easiest to get!” Even if the long-term future is overwhelmingly important, we may run low on future-focused opportunities that outweigh helping people who already exist. (In fact, the vast majority of people in history were uncontroversially in this position.)

Further, the common-sense view is that, all things equal, things that are good for humanity in the short run are good for humanity in the long run. Great-power war and political instability increase the risk of AI race dynamics or the release of deadly bioengineered pandemics. If humanity is going to face future challenges head-on, it would help if more of its members were well-fed, well-educated, and not sick with malaria. 

Torres worries that longtermists would deprioritize climate change relative to other concerns. But to the extent that longtermism changes our priorities, it might make climate change more important. Toby Ord estimates a one in a thousand chance that climate change causes human extinction. If you’re not a longtermist, we should maybe prioritize climate change a bit more than we currently do. If you are a longtermist, we should seriously consider temporarily banning airplanes. 

Present-day longtermists aren’t campaigning for banning airplanes, because they believe that other threats pose even larger risks of human extinction. The real disagreement between Torres and longtermists is about factual matters. If you believe that artificial intelligence might drive us extinct in 30 years, you worry more about artificial intelligence; if you don’t, you worry more about climate change. The philosophy doesn’t really enter into it. 

Torres hasn’t established that TESCREALists are doing anything extreme. Actions taken by TESCREALists that Torres frowns on include:

Participating in governments, foreign policy circles, and the UN.

Fundraising.

Giving advice to people about how to talk to journalists.

Reaching out to people who are good communicators and thought leaders to convince them of things.

Following social norms and avoiding needless controversy.

Trying to avoid turning people off unnecessarily.

All social movements do these things. It isn’t a dark conspiracy for a movement to try to achieve its goals, especially if the movement’s philosophy is that we should direct our finite resources toward doing the most possible good. 

Torres has received death threats and harassment. I — like any minimally decent person — condemn death threats and harassment wholeheartedly. But harassment is an internet-wide problem, particularly for women and nonbinary people. If harassment were caused by TESCREAList extremism, people wouldn’t be sending each other death threats over not liking particular movies. If even one in ten thousand people thinks sending death threats is okay, critics will face death threats — but it’s unreasonable to hold the death threats against the 9,999 people who think death threats are wrong and would never send one. No major or even minor thinkers in effective altruism, transhumanism, the rationalist movement, or longtermism support harassment. 

Torres is particularly concerned about TESCREALists cavalierly running the risk of nuclear war. They criticize Eliezer Yudkowsky for supporting a hypothetical international treaty that permits military strikes against countries developing artificial intelligence — even if those countries are nuclear powers and the action risks nuclear war. 

But almost any action a nuclear power takes relating to another nuclear power could potentially affect the risk of nuclear war. The war in Ukraine, for example, might increase the risk that Vladimir Putin will choose to engage in a nuclear first strike. That doesn’t mean that NATO should have simply allowed the invasion to happen without providing any assistance to Ukraine. We must trade off the risk of nuclear war against other serious geopolitical concerns. As the world grows more dangerous, our risk calculus should include the dangers posed by emerging technologies, such as bioengineered pandemics and artificial intelligence. We shouldn’t engage in reckless nuclear brinkmanship, but similarly we shouldn’t be so concerned about nuclear war that we miss a rogue country releasing a virus a thousand times more deadly and virulent than COVID-19. 

Torres’ implication that only TESCREALists think this way is simply false. Eliezer Yudkowsky’s argument is no different from calculations that have been made by policymakers across the globe since 1945. If anything, longtermists are more cautious about nuclear war than many saber-rattling politicians for the same reasons they care more about climate change. For example, 80,000 Hours characterizes nuclear security as “among the best ways of improving the long-term future we know of,” although it’s “less pressing than our highest priority areas.”

Torres themself supports a moratorium, perhaps even permanent, on research into artificial intelligence. I have no idea how they believe this would be enforced without the threat of some form of military intervention. Lack of intellectual honesty about the costs of your preferred policies is not a virtue. 

Paradoxically, although Torres believes that TESCREALists sacrifice the well-being of present-day people in the name of speculative hopes about the future, the policies Torres supports involve far more wide-ranging and radical sacrifices. They write:

[I]f advanced technologies continue to be developed at the current rate, a global-scale catastrophe is almost certainly a matter of when rather than if. Yes, we will need advanced technologies if we wish to escape Earth before it’s sterilised by the Sun in a billion years or so. But the crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it.

The solution? For us “to slow down or completely halt further technological innovation.” In a different article, they call for an end to economic growth and to all attempts to “subjugate and control” nature.

It’s possible that Torres is phrasing their beliefs more strongly than they hold them. Perhaps they simply believe that we should avoid developing new technologies that pose an outsized risk of harm — a wise viewpoint originally developed by the TESCREAList philosopher Nick Bostrom.

But let’s say that Torres means what they say. Then let us be clear about the consequences of ending technological innovation, economic growth, and the control of nature. Throughout the vast majority of human history, only half of children survived to the age of 15; today, 96% do. Because of the Green Revolution and global transportation networks, for the first time in history, famine happens only if a government is too poorly run to take the simple steps necessary to prevent it. The only solution anyone has discovered for an effective end to poverty is economic growth. Before the Industrial Revolution, all but a tiny minority of elites lived in what we would currently consider extreme poverty.

Many disabled people rely on technology for their survival. If we end all attempts to control nature, innumerable disabled people will die, from people who need ventilators to breathe to premature babies in the NICU. I take a daily pill that treats the disease that would otherwise make my life unlivable; it costs pennies per dose. My six-year-old son has all human knowledge available at his fingertips, even if he mostly uses it to learn more about Minecraft. Due to our economic surplus, an unprecedented number of people have the education and free time to develop in-depth opinions about philosophical longtermism. 

Technological progress continues to benefit the world. To pick only one example, since 2021, when Torres called for an end to technological innovation, solar technology has improved massively — making solar and other clean energy technologies one of our best hopes for fighting climate change. And while large language models get the headlines, most inventions solve the boring problems of ordinary people, as they always have: For example, while traditional cookstoves are a major cause of indoor air pollution, we have yet to develop clean cookstoves that most developing-world consumers want to use. Technology matters.  

For all their faults, TESCREALists usually have a very concrete vision of the future they want: interstellar colonization, the creation of nonhuman minds that transcend their creators, technology giving us new abilities both earthshattering (immortality!) and trivial (flight!). Torres’ vision is opaque at best. 

Torres talks a lot about deliberative-democratic institutions and Indigenous wisdom. They call for “attunement to nature and our animal kin, not estrangement from them; humility, not growth-obsessed, technophilic, rocket-fueling of current catastrophic trends; lower birthrates, not higher; and so forth.” But they give few specifics about what they think a society marked by attunement to nature and humility and Indigenous wisdom would look like. Specifics about Torres’ ideal world, I think, would raise questions about what happens to the NICU babies. 

Torres’ disagreement with TESCREALists is not about whether to care about future people, which they do. It isn’t about whether we should sacrifice the well-being of current people in the hopes of achieving some future utopia: Although Torres criticizes utopian thinking, they engage in it themself. It isn’t even about what measures are acceptable to achieve utopia; Torres achieves moral purity through refusing to discuss how the transition to their ideal society would be accomplished. 

It is entirely and exclusively about what the utopia ought to look like.

Many people find the TESCREAList vision of the future unappealing. The discussion of how we should shape the future should include more opinions from people who didn’t obsessively read science fiction novels when they were 16. But Torres’ critique of TESCREALism ultimately comes from an even more unappealing place: a complete rejection of technological progress.

Torres can dismiss all TESCREALists out of hand because Torres is opposed to economic growth and even the most necessary control of nature. Everyone else has to consider specific ideas. How likely is it that we’ll develop advanced artificial intelligence in the next century, and how much of a risk does it pose? What international treaties should we make about dangerous emerging technologies? Where should you get off the crazy train? These questions are important — and Torres’ critiques of TESCREALism don’t help us answer them. 

  1. ^

    If they dropped “Cosmism” the acronym could be REALEST, and it would be much less unwieldy.

  2. ^

    Going forward I’ll mostly be talking about Torres, who has written far more about their viewpoints.

  3. ^

    Of course, a large number of people working in tech — including many people working on artificial intelligence — have never heard of any of these ideologies.

  4. ^

    While the Humanist Manifesto was written by a religious humanist, many signers were atheists, and 20th-century humanist movements were generally secular.

  5. ^

    Their job board includes listings at many organizations under the effective altruism umbrella, as well as more traditional organizations like USAID and the Bill & Melinda Gates Foundation.

  6. ^

    One example is Nick Beckstead and Teruji Thomas’s paper, “A paradox for tiny probabilities and enormous values.”


Comments

One side point about science fiction, eugenics and transhumanism and left/right politics. I am not a sci-fi expert, but the sci-fi that to me most obviously embodies "transhumanist" ideology that I have encountered has GOT to be Iain M. Banks' Culture novels, depicting a future (apparently) utopian society run by broadly benevolent AIs that makes routine use of genetic modification to reduce human suffering and make humans "better". I am not the only one to have noticed this; Musk is a fan: https://www.telegraph.co.uk/books/what-to-read/does-elon-musk-really-understand-books-claims-inspired/  It would be going a bit far to suggest that the Culture novels are an unambiguous endorsement of the society they depict, but I think they are broadly in favour of it.

But the interesting thing about this is that whilst the books are "libertarian" in some sense, the Culture is a socialist society and Banks was an avowedly socialist, far-left writer. The books are also (at least in intent, people can reasonably disagree about how well they realize this) ultra-"liberal", depicting a society where almost everyone is pansexual and changes their gender at least once in their lifetime and all racial and gender hierarchy has been abolished. In "Player of Games", an ordinary Culture citizen doesn't even have the concept of gender hierarchy and has to have it explained to him. 

I don't think this shows that there's no danger in transhumanist ideas. Left utopianism can be dangerous (think Stalin!). Many leftists in the 30s supported bad eugenics. And I don't think it shows that there are no concerns about some prominent EAs being sympathetic to bad right-wing views on race, gender politics etc. But I do think it shows that things are more complicated than transhumanism=eugenics=far-right.

Personally, I find the acronym frustrating because of how foreign all of it is to me based on my own experience as a (fairly new — less than two years) EA in the DC area. I like to think I have an okay read on the community here, and the behaviors and beliefs described by "TESCREALism" just do not seem to map reliably onto how people I know actually think and behave, which has led me to believe that Torres' criticisms are mostly bad faith or strawmen. I admittedly don't interact very much with AI safety or what I sort of nebulously consider to be the "San Francisco faction" of EA (faction probably being too strong a word), so maybe all of y'all over there are just a bunch of weirdos (kidding (like 90%))!

While I don't agree with a lot of Torres' beliefs and attitudes, I don't agree with this article that concerns about EA extremism are unwarranted. Take the stance on SBF, for example:

It’s true that Sam Bankman-Fried, an effective altruist Jane Street employee, went on to commit an enormous fraud — but the fraud was universally condemned by members of the effective altruist community. People who do evil things exist in every sufficiently large social movement; it doesn’t mean that every movement recommends evil. 

Yes, SBF does not represent the majority of EAs, but he still conducted one of the largest frauds in history, and it's unlikely he would have counterfactually done this without EA existing. Harmful, extremist EA-motivated actions clearly have happened, and they were not confined to a few randos on message boards, but involved actual highly influential and respected EA figures.

Extremism might be in the minority, but it's still a real concern if there's a way to translate that extremism into real world harm, as happened with SBF.

I think this is especially important with AI stuff. Now, I don't believe in the singularity, but many EAs do, and some of them are setting out to build what they believe to be a god-like AI. That would be a lot of power concentrated in the hands of the people who build it. If they are extremist, flawed, or have bad values, those flaws could be locked in for the rest of time. Even if (more likely) the AI is just very powerful rather than god-like, a few people could still have a significant effect on the future. I think this more than justifies increased scrutiny of the flaws in EA values and thinking.

I tend to believe that SBF committed fraud for the same reasons that ordinary people commit fraud (both individual traits like overconfidence and systemic factors like the lack of controls in crypto to prevent fraud). Effective altruism might have motivated him to put himself in the sort of situation where he'd be tempted to commit fraud, but I really don't see much evidence that SBF's psychology is much different than e.g. Madoff's.

I don't know that "extremist" is a good characterization of FTX & Alameda's actions.

Usually "extremist" implies a willingness to take highly antisocial actions for the sake of an extreme ideology.

It's fair to say that trying to found a billion dollar company with the explicit goal of eventually donating all profits is an extreme action. It's highly unusual and goes much further with specific ideas than most adherents do. But unless one is taking a very harsh stance against capitalism (or against cryptocurrency), it's hard to call this action highly antisocial just yet. The antisocial bit comes with the first fraudulent action taken.

A narrative I keep seeing is that Sam and several others thought not only that the longstanding arguments against robbing banks to donate to charity are flawed, but that in fact they should feel OK robbing customers who trusted them in order to get donation funds.

If someone believed this extreme-ified version of EA and so they committed fraud with billions of dollars, that would be extremist. But my impression is that, whether it started as a grievous accounting flaw, a risky conspiracy between amphetamine-fueled manics, or something else, the fraud wasn't a result of people doing careful math, sleeping on it, and ultimately deciding it was net positive. It involved irrational decisions. (This is especially clear by the end. I'd need to refresh my memory to talk specifics, but I think in the last months SBF was making long-term illiquid investments that made it even less plausible they could have avoided bankruptcy, and that blatantly did not increase EV even from a risk-neutral perspective.)

If the fraud was irrational regardless of whether their ideology was ok with robbery, then in my view there's little evidence ideology caused the initial decision to commit fraud.

Instead the relevant people did an extreme action, and then made various moral and corporate failures typical of white collar crime, which were antisocial and went against their ideology. 

Relevant: Émile Torres posted a "TESCREAL FAQ" today (unrelated to this article I assume; they'd mentioned this was in the works for a while).

I've only skimmed it so far, but here's one point that directly addresses a claim from the article.

Ozy:

However, Torres is rarely careful enough to make the distinction between people’s beliefs and the premises behind the conversations they’re having. They act like everyone who believes one of these ideas believes in all the rest. In reality, it’s not uncommon for, say, an effective altruist to be convinced of the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism. All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.”

TESCREAL FAQ:

5. I am an Effective Altruist, but I don't identify with the TESCREAL movement. Are you saying that all EAs are TESCREALists?

[...] I wouldn’t say—nor have I ever claimed—that everyone who identifies with one or more letters in the TESCREAL acronym should be classified as “TESCREALists.” ... There are some members of the EA community who do not care about AGI or longtermism; their focus is entirely on alleviating global poverty or improving animal welfare. In my view, such individuals would not count as TESCREALists.

Having followed Torres's work for a while, I felt like Ozy's characterization was accurate -- I've shared the impression that many uses of TESCREAL have blurred the boundaries between the different movements / treated them like a single entity. (I don't have time to go looking for quotes to substantiate this, however, so it's possible my memory isn't accurate -- others are welcome to check this if they want.) Either way, it seems like Torres is now making an effort to avoid this (mis)use of the label.

Yeah, I think Ozy's article is a great retort to Torres specifically, but probably doesn't extrapolate well to anyone who has used the TESCREAL label to explain this phenomenon, many of whom probably have stronger arguments.

This is great - really useful to have calm, researched responses to a bundle/bungle/shibboleth like the one Torres and Gebru seem to have come up with. I have heard the eugenicist critique a lot, and unfortunately some of it is influential via culture/media, e.g. this book (the author has a CS background and makes many good points (as does Torres in their other writing on x-risk), but the media tagline ends up being reductive and sensationalist).

Great article, love it!

I have one minor disagreement here, which isn't even that important...

"Torres gives an example of an “evil organization” at which effective altruists recommend people work: the proprietary trading firm Jane Street. But Jane Street seems at worst useless. There are many criticisms to be made of a system in which people earn obscene amounts of money making sure that the price of a stock in Tokyo equalizes with the price of a stock in London slightly faster than it otherwise would."

I think there's a possibility Jane Street could be not just "at worst useless" but at least a little net negative (personally I would put it in the 5%-30% chance range). There are potential positive effects, like increasing liquidity in the market. On the other hand, the kind of high-frequency trading they do has the potential to destabilise markets. Also, this kind of firm may well concentrate wealth and increase inequality, which might be bad.

Would the world be better or worse if high-frequency trading firms like Jane Street didn't exist? Hard to say, but they might be net negative, so I don't think Jane Street is "at worst useless".

I noticed Torres likes to bring up a particular critique around how longtermism is eugenicist. I haven't been great at parsing it because it's never very well explained, but my best guess is that it goes:

  • Longtermists prioritise the long term future very strongly
  • They are regularly happy to make existential trade-offs for one group of people in order to improve the lives of a different group of people in their thought experiments
    • In some cases, arguments such as these have been made in order to materially redirect funding from saving lives of the global poor to 'increasing capacity' for rich people who could work on longtermist causes
    • These 'capacity increases' sometimes look like just improving quality of life for these people (ex. Wytham Abbey in the most egregious case)
  • Sometimes these groups get selected in a way that makes them look suspiciously genetic
    • For example, the people who get privileged in these scenarios are overwhelmingly (but not exclusively) white, and the people who get traded off are overwhelmingly non-white
  • Therefore, longtermism isn't necessarily intentionally eugenicist, but without significant guardrails could very well end up improving the lives of some genetic groups at the expense of others

This is the best steelperson I could come up with. I am sympathetic to the above formulation, but I imagine Torres' version is a bit more extreme in practice. Fundamentally, I wonder if longtermists should more strongly reject arguments that involve directing funding toward privileged people for 'capacity building'.

But regardless, I'd love to know what your thoughts on that particular line of reasoning are (and not necessarily Torres' specific formulation of it, which as you've demonstrated, is likely to be too extreme to be coherent).

(I wonder if now that we've thoroughly discredited this person, we can move onto more interesting and stronger critiques of longtermism)

[This comment is no longer endorsed by its author]

I assume Torres is thinking about transhumanism. Transhumanists want to use genetic engineering, amongst other things, to ensure that people are born with more desirable capacities and abilities than they would be otherwise. That's one thing that people sometimes mean by "eugenics". There's a culture gap here between the analytic philosophy out of which EA comes and other areas of academia. Mildly "eugenic" views like this are quite common in analytic ethics I think, but my impression (less sure about this) is that they horrify a lot of people in other humanities disciplines.

Stronger "eugenic" views and, relatedly, extremely controversial views about race are also held by some prominent EAs, i.e. Scott Alexander, Nick Bostrom (at least at one point.) I.e. Scott is at least somewhat sympathetic to trying to influence which humans have children in order to improve the genetics of the population, though he is cagey about what his actual position is: https://www.astralcodexten.com/p/galton-ehrlich-buck?hide_intro_popup=true  Apart from the infamous racism email, Bostrom at one point discussed "dysgenic" trends (less intelligent people having more children) as an X-risk in one of his early papers (albeit to say that he didn't think the issue was all that important.) A post defending a goal of trying to stop people from having children with a high chance of various genetically influenced diseases and claiming the "eugenics" label as a positive one received many upvotes on this forum: https://forum.effectivealtruism.org/posts/PTCw5CJT7cE6Kx9ZR/most-people-endorse-some-form-of-eugenics  Peter Singer famously argues that it parents should have a right to kill disabled babies at birth if they want to replace them with non-disabled babies (because all babies aren't "persons" anyway): https://www.nytimes.com/2003/02/16/magazine/unspeakable-conversations.html

 

I'm inclined to write defenses of views in the latter paragraph:

  • My read (I admit I skimmed) is that Scott doesn't opine because he is uncertain whether there is a large-scale reproduction-influencing program that would be a good idea in a world without GE on the horizon, not that he has a hidden opinion about reproduction programs we ought to be doing despite the possibility of GE.
  • I don't think the mere presence of a "dysgenic" discussion in a Bostrom paper merits criticism. Part of his self-assigned career path is to address all of the X-risks. This includes exceedingly implausible phenomena such as demon-summoning, because it's probably a good idea for one smart human to have allocated a week to that disaster scenario. I don't think dysgenic X-risks are obviously less plausible than demon-summoning, so I think it's a good idea someone wrote about it a little.
  • The article on this forum originated as a response to Torres' hyperbolic rhetoric, and primarily defends things that society is already doing such as forbidding incest.
  • Singer's argument, if I remember correctly, does not involve eugenics at all. It involves the amount of enjoyment occurring in a profoundly disabled child vs a non-disabled child, and the effects on the parents, but not the effect on a gene pool. I believe the original actually indicated severe disabilities that are by their nature unlikely to be passed on (due to lethality, infertility, incompatibility with intercourse, or incompatibility with consent), so the only impact would be to add a sibling to the gene pool who might be a carrier for the disability.

You are right—thank you for clarifying. This is also what Torres says in their TESCREAL FAQ. I've retracted the comment to reflect that misunderstanding, although I'd still love Ozy's take on the eugenics criticism.

My take was originally in my article but was cut for flow; I wound up posting it on my blog.

Executive summary: The concept of "TESCREALism" highlights some common assumptions in Silicon Valley, but it oversimplifies diverse views and fails to demonstrate that longtermists and others are sacrificing present wellbeing for speculative future benefits.

Key points:

  1. TESCREALism usefully points out common assumptions behind many Silicon Valley debates, but it often depicts a diverse range of views as a monolithic, dangerous ideology.
  2. Concern about thought experiments is better directed at analytic philosophy in general. Longtermists acknowledge the need to "get off the crazy train" and not always follow arguments to extreme conclusions.
  3. Longtermism rearranges but doesn't replace moral priorities. It may even increase the importance of present-day issues like climate change. Factual disagreements drive differences in prioritization.
  4. The piece argues that Émile Torres' proposed solutions, like halting technological progress, would be far more destructive to present-day wellbeing than longtermist ideas.
  5. Specific important questions remain about the development of advanced AI and other emerging technologies. Critiques of "TESCREALism" do little to clarify them.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
