Since I started PauseAI, I’ve encountered a wall of paranoid fear from EAs and rationalists: a fear that the slightest amount of wrongthink, or any willingness to use persuasive speech as an intervention, will taint their minds for life with self-deception, that “politics” will kill their minds. I saw people shake with fear at joining a protest against an industry they believed would destroy the world if left unchecked, because they didn’t want to be photographed next to an “unnuanced” sign. They were afraid of sinning by saying something wrong. They were afraid of sinning by even trying to talk persuasively!

The worry about destroying one’s objectivity was often phrased to me as “being a scout/not being a soldier”, referring to Julia Galef’s book The Scout Mindset. I think the metaphor itself gives us everything we need to answer the fear of not being a scout. Scouts are important for success in battle because accurate information is needed to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should Be a Mapmaker and Fear That Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.

Even a scout can’t be only a scout. If a scout reports what they found to a superior officer, and the officer wants to pretend they didn’t hear it, a good scout doesn’t just stay curious about the situation or note that the superior officer has chosen a narrative. They fight to be heard! Because the truth of what they saw matters to the war effort. The success of the scout and the officer and the soldier is all ultimately measured in the outcome of the war. Accurate intel is important for something larger than the map: for the battle.

Imagine if the insecticide-treated bednets hemmed and hawed about the slight chance of harm from their use in anti-malaria interventions. Would that help one bit? No! What helps is working through foreseeable issues ahead of time at the war table, then actually trying the intervention with each component fully committed. Bednets are soldiers, and all our thinking about the best interventions would be useless if there were no soldiers to actually carry the interventions out. Advocating for the PauseAI proposal and opposing companies who are building AGI through protests is an intervention, much like spreading insecticide-treated bednets, but instead of bednets the soldiers are people armed with facts and arguments that we hope will persuade the public and government officials.

Interventions that involve talking, thinking, persuasion, and winning hearts and minds require commitment to the intervention and not simply to the accuracy of your map or your reputation for accurate predictions. To be a soldier in this intervention, you have to be willing to be part of the action itself and not just part of the zoomed-out thinking. This is very scary for a contingent of EAs and rationalists today who treat thinking and talking as sacred activities that must follow the rules of science or LessWrong and not be used for anything else. Some of them would like to entirely forbid "politics" (by which they generally mean trying to persuade people of your position and get them on your side) or "being a [rhetorical] soldier" out of the fear that people cannot compartmentalize persuasive speech acts from scout thinking and will lose their ability to earnestly truth-seek.

I think these concerns are wildly overblown. What are the chances that amplifying the message of an org you trust in a way the public will understand undermines your ability to think critically? That's just contamination thinking. I developed the PauseAI US interventions with my scout hat on. When planning a protest, I'm an officer. At the protest, I'm a soldier. Lo and behold, I am not mindkilled. In fact, it's illuminating to serve in all of those roles-- I feel I have a better and more accurate map because of it. Even if I didn't, a highly accurate map simply isn't necessary for all interventions. Advocating for more time for technical safety work and for regulations to be established is kind of a no-brainer.

It's noble to serve as a soldier when we need humans as bednets to carry out the interventions that scouts have identified and officers have chosen to execute. Soldiers win wars. The most accurate map made by the most virtuous scout is worth nothing without soldiers to do something with it.

Comments

Tao

This is a valuable post, but I don't think it engages with a lot of the concern about PauseAI advocacy. I have two main reasons why I broadly disagree:

  1. Pausing AI development could be the wrong move, even if you don't care about benefits and only care about risks

AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn't merely about the nature of the risks but about the impact of potential interventions.

Some think that, of all interventions, pausing AI development is a particularly risky one. There are dangers like:

  • Falling behind China
  • Creating a compute overhang with subsequent rapid catch-up development
  • Polarizing the AI discourse before risks are clearer (and discrediting concerned AI experts), turning AI into a politically intractable problem, and
  • Causing AI lab regulatory flight to countries with lower state capacity, less robust democracies, fewer safety guardrails, and a lesser ability to mandate security standards to prevent model exfiltration

People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).

Although you may have felt that you did your "scouting" work and arrived at a position worth defending as a warrior, others' comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.

If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues' input that one's actions may be counterproductive to that end-goal. 

2. Fighting does not necessarily get one closer to winning. 

Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism of your fighting style seriously should be paramount.

I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don't think this is the core of most thoughtful people's criticism.

To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to

(1) the feeling that this post dismissed criticism of PauseAI without engaging with object-level arguments, and the feeling that this post wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and

(2) the idea that the scout-work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)

Sorry for not editing this better, I wanted to write it quickly. I welcome people's responses though I may not be able to answer to them!

lol "great post, but it fails to engage what I think about when I think of PauseAI"

This analysis seems roughly right to me. Another piece of it I think is that being a 'soldier' or a 'bednet-equivalent' probably feels low status to many people (sometimes me included) because:

  • people might feel soldiering is generally easier than scouting, and they are more replaceable/less special
  • protesting feels more 'normal' and less 'EA' and people want to be EA-coded

To be clear, I don't endorse this; I am just pointing out something I notice within myself and others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have 'EA vibes'. The first one I think is somewhat reasonable (e.g. I wouldn't want to pay someone to be a full-time protest attendee just to bring up the numbers), but I think soldiering can be quite challenging and laudable and part of a portfolio of types of actions one takes.

Yes, this matches what potential attendees report to me. They are also afraid of being “cringe” and don’t want to be associated with noob-friendly messaging, which I interpret as status-related.

This deeply saddens me because one of the things I most admired about early EA and found inspirational was the willingness to do unglamorous work. It’s often neglected so it can be very high leverage to do it!

I feel this way—I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.

I'd like to add another bullet point:
- personal fit

I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I'd get used to it if I tried more often.

Love this!

Soldiers win wars. The most accurate map made by the most virtuous scout is worth nothing without soldiers to do something with it.

My experience in animal protection has shown me the immense value of soldiers and FWIW I think some of the most resolute soldiers I know are also the scouts I most look up to. Campaigning is probably the most mentally challenging work I have ever done. I think part of that is constantly iterating through the OODA loop, which is cycling through scout and soldier mindsets.

Most animal activists I know in the EA world were activists first and EAs second. It would be interesting to see more EAs tapping into activist actions, which are often a relatively low lift. And I think embracing the soldier mindset is part of that happening.

Setting aside the concrete example of Pause AI (haven't given it enough thought), I totally agree with the statement in the title. 
Also, if I may: to some extent, you can accomplish things even when your soldiers aren't as smart, or as ideologically aligned with you, as your scouts; same thing holds for officers. The historical example that comes to mind is the army of the Soviet Union: for some years at least, an important fraction of the officers were former officers of the imperial army; they were called "voenspetsy", which means "military specialists". 

From the Wikipedia page on the Red Army

"In June 1918, Leon Trotsky abolished workers' control over the Red Army, replacing the election of officers with traditional army hierarchies and criminalizing dissent with the death penalty. Simultaneously, Trotsky carried out a mass recruitment of officers from the old Imperial Russian Army, who were employed as military advisors (voenspetsy).[19][20] The Bolsheviks occasionally enforced the loyalty of such recruits by holding their families as hostages.[21][page needed] As a result of this initiative, in 1918, 75% of the officers were former tsarists.[22] By mid-August 1920 the Red Army's former tsarist personnel included 48,000 officers, 10,300 administrators, and 214,000 non-commissioned officers.[23] When the civil war ended in 1922, ex-tsarists constituted 83% of the Red Army's divisional and corps commanders."

[anonymous]

I think we have all the info we need to contradict the fear of not being a scout in her metaphor. Scouts are important for success in battle because accurate information is important to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.

seems locally invalid.[1]

  • argues from the meaning of terms in a metaphor
  • "Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker" is not a description of the position you want to argue against, because you can do things with information other than optimizing what you say to persuade people.
  1. ^

    'locally invalid' means 'this is not a valid argument', separate from the truth of the premises or conclusion

At the risk of being pedantic, I reread your comment several times[1] and I still don't see why it's locally invalid. I can see why it's externally/globally invalid, but I don't think you actually speak to the local validity here? 
 

  1. ^

    And the comment is pretty short so I don't think I'm missing something.
