Since I started PauseAI, I’ve encountered a wall of paranoid fear from EAs and rationalists that the slightest amount of wrongthink, or willingness to use persuasive speech as an intervention, will taint a person’s mind for life with self-deception: that “politics” will kill their mind. I saw people trembling at the thought of joining a protest against an industry they believed would destroy the world if left unchecked, simply because they didn’t want to be photographed next to an “unnuanced” sign. They were afraid of sinning by saying something wrong. They were afraid of sinning by even trying to talk persuasively!

The worry about destroying one’s objectivity was often phrased to me as “being a scout/not being a soldier”, referring to Julia Galef’s book The Scout Mindset. I think we have all the info we need to contradict the fear of not being a scout in her metaphor. Scouts are important for success in battle because accurate information is important to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.

Even a scout can’t be only a scout. If a scout reports what they found to a superior officer, and the officer wants to pretend they didn’t hear it, a good scout doesn’t just stay curious about the situation or note that the superior officer has chosen a narrative. They fight to be heard! Because the truth of what they saw matters to the war effort. The success of the scout, the officer, and the soldier alike is ultimately measured in the outcome of the war. Accurate intel is important for something larger than the map: for the battle.

Imagine if the insecticide-treated bednets hemmed and hawed about the slight chance of harm from their use in anti-malaria interventions. Would that help one bit? No! What helps is working through foreseeable issues ahead of time at the war table, then actually trying the intervention with each component fully committed. Bednets are soldiers, and all our thinking about the best interventions would be useless if there were no soldiers to actually carry the interventions out. Advocating for the PauseAI proposal and protesting the companies that are building AGI is an intervention, much like distributing insecticide-treated bednets, but instead of bednets the soldiers are people armed with facts and arguments that we hope will persuade the public and government officials.

Interventions that involve talking, thinking, persuasion, and winning hearts and minds require commitment to the intervention itself, not simply to the accuracy of your map or your reputation for accurate predictions. To be a soldier in this intervention, you have to be willing to be part of the action itself and not just part of the zoomed-out thinking. This is very scary for a contingent of EAs and rationalists today who treat thinking and talking as sacred activities that must follow the rules of science or LessWrong and never be used for anything else. Some of them would like to forbid "politics" entirely (by which they generally mean trying to persuade people of your position and get them on your side) or "being a [rhetorical] soldier", out of the fear that people cannot compartmentalize persuasive speech acts from scout thinking and will lose their ability to earnestly truth-seek.

I think these concerns are wildly overblown. What are the chances that amplifying the message of an org you trust, in a way the public will understand, undermines your ability to think critically? That's just contamination thinking. I developed the PauseAI US interventions with my scout hat on. When planning a protest, I'm an officer. At the protest, I'm a soldier. Lo and behold, I am not mindkilled. In fact, it's illuminating to serve in all of those roles: I feel I have a better and more accurate map because of it. Even if I didn't, a highly accurate map simply isn't necessary for all interventions. Advocating for more time for technical safety work and for regulations to be established is kind of a no-brainer.

It's noble to serve as a soldier when we need humans as bednets to carry out the interventions that scouts have identified and officers have chosen to execute. Soldiers win wars. The most accurate map made by the most virtuous scout is worth nothing without soldiers to do something with it.

Comments (19)


Tao

This is a valuable post, but I don't think it engages with a lot of the concern about PauseAI advocacy. I have two main reasons why I broadly disagree:

  1. Pausing AI development could be the wrong move, even if you don't care about benefits and only care about risks

AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn't merely about the nature of the risks but about the impact of potential interventions.

Of all interventions, pausing AI development is, some think, a particularly risky one. There are dangers like:

  • Falling behind China
  • Creating a compute overhang with subsequent rapid catch-up development
  • Polarizing the AI discourse before risks are clearer (and discrediting concerned AI experts), turning AI into a politically intractable problem, and
  • Causing AI lab regulatory flight to countries with lower state capacity, less robust democracies, fewer safety guardrails, and a lesser ability to mandate security standards to prevent model exfiltration

People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).

Although you may have felt that you did your "scouting" work and arrived at a position worth defending as a warrior, others' comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.

If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues' input that one's actions may be counterproductive to that end-goal. 

  2. Fighting does not necessarily get one closer to winning.

Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism of your fighting style seriously should be paramount.

I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don't think this is the core of most thoughtful people's criticism.

To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to

(1) the feeling that this post dismissed criticism of PauseAI without engaging with object-level arguments, and the feeling that this post wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and

(2) the idea that the scout-work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)

Sorry for not editing this better; I wanted to write it quickly. I welcome people's responses, though I may not be able to reply to them!

This analysis seems roughly right to me. Another piece of it, I think, is that being a 'soldier' or a 'bednet-equivalent' probably feels low-status to many people (sometimes me included) because:

  • people might feel soldiering is generally easier than scouting, and that they are more replaceable/less special
  • protesting feels more 'normal' and less 'EA' and people want to be EA-coded

To be clear, I don't endorse this; I am just pointing out something I notice within myself/others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have 'EA vibes'. The first one I think is somewhat reasonable (e.g. I wouldn't want to pay someone to be a full-time protest attendee to bring up the numbers), but I think soldiering can be quite challenging and laudable, and part of a portfolio of types of actions one takes.

Yes, this matches what potential attendees report to me. They are also afraid of being “cringe” and don’t want to be associated with noob-friendly messaging, which I interpret as status-related.

This deeply saddens me because one of the things I most admired about early EA and found inspirational was the willingness to do unglamorous work. It’s often neglected so it can be very high leverage to do it!

I feel this way—I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.

I'd like to add another bullet point:
  • personal fit

I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I'd get used to it if I tried more often.

Love this!

Soldiers win wars. The most accurate map made by the most virtuous scout is worth nothing without soldiers to do something with it.

My experience in animal protection has shown me the immense value of soldiers, and FWIW some of the most resolute soldiers I know are also the scouts I most look up to. Campaigning is probably the most mentally challenging work I have ever done. I think part of that is constantly iterating through the OODA loop, which means cycling between scout and soldier mindsets.

Most animal activists I know in the EA world were activists first and EAs second. It would be interesting to see more EAs tapping into activist actions, which are often a relatively low lift. And I think embracing the soldier mindset is part of making that happen.

[anonymous]

I think we have all the info we need to contradict the fear of not being a scout in her metaphor. Scouts are important for success in battle because accurate information is important to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.

seems locally invalid.[1]

  • argues from the meaning of terms in a metaphor
  • "Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker" is not a description of the position you want to argue against, because you can do things with information other than optimizing what you say to persuade people.
  1. 'locally invalid' means 'this is not a valid argument', separate from the truth of the premises or conclusion
