This is a special post for quick takes by Geoffrey Miller. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

AI Czar attacks EA. (Again.)

Today, in this post on X, the U.S. 'AI Czar' David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming that it's nothing more than a 'censorship power play': a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and to block technological progress.

He quote-posted this blog by Jordan Schachtel titled 'Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI'.

As an AI safety advocate, a member of Humans First, an Effective Altruist, and a political conservative, I'm angry about this misrepresentation of the AI safety campaign. And I think EAs should fight back harder against senior federal officials smearing our movement.

Any suggestions on how to respond? I don't have time this week to write a detailed rebuttal, but I'd be happy to link and promote anything that others write.

(Not a solution, but a general observation about people who engage in bashing EA.)

The "dot connectors" will always connect the dots, infer or invent nefarious motivations, and try to bucket you as they like. The problem is that you can't neatly map EAs onto the political spectrum -- yes, there are dominant trends, but the variance in views is sufficiently high that commentators have genuinely no clue where EAs belong. This makes sense because most major movements in history have been political ones, so when assessing EA, most people pull out their internal political philosophy detector and you end up with a mess like the chart below! 

But EA is a moral philosophy movement, and the chain of thinking is genuinely different. Instead of thinking about how to organize society and labor, EAs unanimously agree on beneficentrism and deal with questions like, "What morally matters? To what degree? Which interventions are most effective? How do you even assess what is most effective?" When you organize a movement around this set of questions, you end up with:

  • Some people who want to automate software engineering, some who want to pause AI progress entirely, and others who think we should defensively accelerate it
  • At least two frontier AI labs: let's not forget OpenAI received $30 million in philanthropic money during its inception!
  • Some EAs who think that AI will be a big deal for {their cause area}, others who are skeptical of the whole AI bundle
  • Some EAs passionately dislike AI writing, some are fine with methodical use of AI in writing, and some are even more liberal about it
  • One particular EA who is the loudest voice combatting the data center water usage myth
  • (At least) one person from the EA-sphere who has large holdings in AI infrastructure
  • And conservative AI Safetyists like you and liberal, long-timeline accelerationists like me

I don't know what the best solution for combatting EA bashing is, but spreading the idea that EA is more politically and intellectually diverse than people think should help. 


This is a slow-burn solution, but the most effective support and rebuttals will come from people who aren't EAs, but who are fair and principled, and have had enough exposure to EA to know when attacks are unfair. (E.g., see Dean Ball this week.) So the more surface area EAs can create with those sorts of people, the better the position EA is in. For example, I think Andy Masley's data center water use posts created a lot of surface area with such people and have been better for the EA 'brand' than any specific rebuttal.

(Part of this strategy involves, as a general principle, "behaving with as much dignity, integrity, and fairness as possible, even when others aren't.") (Admittedly, my own responses usually involve gently poking fun, but I do try to stay good-natured.)

I think we could use a documentary series where we just follow around EA orgs or individual EAs for a couple of days and see how they talk, live, and act. At the very least, it would be pretty cheap.

My new interview (48 mins) on AI risks for Bannon's War Room: https://rumble.com/v6z707g-full-battleground-91925.html

This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I'd appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.

You mention having "an ambition, even a prayer for [the AI developers]" (~12 min). You might mean this figuratively, but many viewers of that channel will probably take it literally. Since you've admitted elsewhere that you don't believe in God and don't practice any religion, they may see this as a contradiction and suspect you're not being genuine, especially once you become more popular.

My guess is that it's better to be upfront about not being a Christian, in order to retain authenticity. You'll probably always be regarded as an outsider by the conservative right (just as another example: your Twitter handle includes 'poly', and you've spoken at length online about being polyamorous), but you could hope to be perceived as 'the outsider who gets us'. This kinda worked for a while for Sam Harris and Milo Yiannopoulos.

Tobias -- I take your point. Sort of. 

Just as they say 'There are no atheists in foxholes' [when facing risk of imminent death during combat], I feel that it's OK to pray (literally and/or figuratively) when facing AI extinction risk -- even if one's an atheist or agnostic. (I'd currently identify as an 'agnostic', insofar as the Simulation Hypothesis might be true). 

My X handle 'primalpoly' is polysemic, and refers partly to polyamory, but partly to polygenic traits (which I've studied extensively), and partly to some of the hundreds of other words that start with 'poly'. 

I think that given most of my posts on X over the last several years, and the people who follow me, I'm credibly an insider to the conservative right.

'3 Body Problem' is a new 8-episode Netflix TV series that's extremely popular, highly rated (7.8/10 on IMDb), and based on the bestselling 2008 science fiction novel by Chinese author Liu Cixin.

It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.

Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?

I haven't seen the series, but am currently halfway through the second book.

I think it really depends on the person. The sort of person I imagine watching '3 Body Problem', getting hooked, and subsequently pondering how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post.

But sure, if someone mentioned to me that they watched and liked the series and they didn't already know about EA, I think it could be a great way to start a conversation about EA and longtermism.

I think there's a huge difference in potential reach between a major TV series and a LessWrong post.

According to this summary from the Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top-10 Netflix series in over 90 countries.
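(For a rough sense of the arithmetic behind that conversion, assuming roughly hour-long episodes, an assumption on my part rather than a figure from the article:

$$\frac{82{,}000{,}000\ \text{view-hours}}{8\ \text{episodes} \times 1\ \text{hour/episode}} \approx 10{,}250{,}000\ \text{complete viewings} \approx 10\ \text{million people})$$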

Whereas a good LessWrong post might get 100 likes. 

We should be more scope-sensitive about public impact!

I think I am misunderstanding the original question then?

I mean if you ask: "what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students"

then the reach is not the 10 million people watching the show; it's the people you get a chance to speak to.

I completely agree, Geoffrey! I originally read Liu Cixin's series before I became involved in EA, and I would highly recommend it to anyone reading this comment.

I think the series very much touches on themes common in EA thought, such as existential risk, speciesism, and what it means to be moral.[1]

I think what makes Liu's work seem like it has EA themes is that much of the series challenges how humanity views its place in the universe, questioning assumptions about both what the universe is and what our moral obligations to others in it are, much as EA challenges 'common-sense' views of the world and moral obligation.

  1. ^ (I also referenced it in this reply to Matthew Barnett)

PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'Active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:

PDF here

Journal link here

Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?

Abstract
To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).

In my opinion, the book is better, and it relies so heavily on big realizations and plot twists that it's best read blind: before the series, and even before the blurb on the back of the book! So for those who didn't know it was a book, here it is: https://www.amazon.fr/Three-Body-Problem-Cixin-Liu/dp/0765377063


I didn't know about this; now I think I have a new Netflix show to watch! Thanks!

On the topic, I hear season 7, episode 5 of Young Sheldon is about a dangerous AI. Edit: I watched the episode; it's not.
