
What are the ideas which confuse people, lead them to think the wrong thing, or just go down really badly? How could they be more clearly expressed?

Some bad memes from other spaces:
- abolish the police - people claim it doesn't mean what the words literally mean
- ban cars - just deathly unpopular
- open borders - sadly voters hate this framing

Some suggestions for improving them:
- reform the police/ban qualified immunity
- reduce traffic? more walkable neighbourhoods (I don't know, I'm just giving suggestions)

So what are the EA equivalents of these? In other words, what are the things we should stop saying?

Answers

The question wasn't about misconceptions as such, i.e., memes that misrepresent what EA is really about, but about how EA itself has spread memes that misrepresent it. In other words, the question is about what better memes EA should choose to send its messages and represent itself.

EA was never explicitly about mitigating global poverty, earning to give, ignoring systemic change, or utilitarianism. Yet for its first few years, EA disproportionately focused its public messaging on those memes. That has caused these misconceptions about EA to persist for years longer than they otherwise would have, and the point is to recognize that the cause was a mistake EA itself made.

The idea that EA has an overabundance of funding, or a "funding overhang", and that you should act as if how much money you spend on a project doesn't matter. This can easily give someone the impression that donations from the EA community have very little value.

It might be healthier to say something like:

This project sounds pretty promising and will likely be funded by so-and-so. It seems likely to easily clear their bar of cost-effectiveness. Feel free to request additional funding if you think you could productively use it to have a greater impact. They're looking to have as much impact as possible and fund everything with x level of cost-effectiveness and above, so please don't be shy about using additional funding if that increases your impact.

(though that is pretty wordy and I'm not confident that it's an improvement)


Earn to give

This idea is good in practice, but it's very easy to take out of context: "EAs want you to work for an oil company and donate the money to stop oil spills".

I know that's not what anyone means, but the phrase was confusing.

That has always been a strawman of what earning to give is about. My opinion is that at this point it's better for EA to assert itself in the face of misrepresentations instead of trying to defuse them through conciliatory dialogue that has never worked. 

AI risk

I am concerned about AI risk so I don't like including this, but I do think it "polls badly" among my friends who take GiveWell etc pretty seriously. I wonder if it could be reframed to sound less objectionable.

You know, my take on this is that instead of resisting comparisons to Terminator and The Matrix, they should just be embraced (mostly). "Yeah, like that! We're trying to prevent those things from happening. More or less."

The thing is, when you're talking about something that sounds kind of far out, you can take one of two tactics: you can try to engineer the concept and your language around it so that it sounds more normal/ordinary, or you can just embrace the fact that it is kind of crazy, and use language that makes it clear you understand that perception.

So like, "AI Apocalypse Prevention"?

When I introduce AI risk to someone, I generally start by talking about how we don't actually know what's going on inside of our ML systems, that we're bad at making their goals match what we actually want, and that we have no way of trusting that the systems actually have the goals we're telling them to optimize for.

Next I say this is a problem because as the state of the art of AI progresses, we're going to be giving more and more power to these systems to make decisions for us, and if they are optimizing for goals different from ours this could have terrible effects.

I think something about properly testing powerful new technologies and making sure they're not used to hurt people sounds pretty intuitive. I think people intuitively get that anything with military applications can cause serious accidents or be misused by bad actors.

Buck

Unfortunately this isn’t a very good description of the concern about AI, and so even if it “polls better” I’d be reluctant to use it.

Evan_Gaensbauer
I'm aware one problem with "AI risk" or "AI safety" is that those terms don't distinguish the AI alignment problem, the EA community's primary concern about advanced AI, from other AI-related ethics or security concerns. I got interesting answers to a question I recently asked on LessWrong about who else has this attitude towards this kind of conceptual language.
Nathan Young
"AI is the new Nuclear Weapons. We don't an arms race which leads to unsafe technologies" perhaps? 

Why do they object to it? 

My experience has been that people who don't participate in EA at all have a better reception of "AI risk" in general than near-termists in EA do.

I expect long-termists care as much as, if not more than, near-termists in EA about what others outside of EA think of AI risk as a concept.

I also recently asked a related question on LessWrong about the distinction between AI risk and AI alignment as concepts.

Avoid catastrophic industrial/research accidents?

Comments

I think this question is conflating "inaccurate presentation of our beliefs" with "bad optics from accurate representations of our beliefs." It might be helpful to separate the two.  

At first, with the question of "bad EA memes", I also wondered if it might include "things lots of EAs believe that make them less effective at doing good".

Agreed.

Summary: It was bad optics for EA to associate itself with memes that misrepresent what the movement is really about. It was the mistaken branding efforts in EA's first few years that got EA stuck with these inaccurate interpretations of the movement.

It's both. Common misconceptions about EA are not only inaccurate presentations of what EA is about. They're the consequence of EA misrepresenting itself. That's why it was bad optics. 

The impression I've gotten from other comments on this post is that people aren't very aware that these misconceptions about EA were caused by EA branding itself with memes like the one Hauke Hillebrandt references in this comment.

I don't know if it's because most people have joined the EA movement before it got stuck with these misconceptions. 

Yet I've participated in EA for a decade and I remember for the first few years we associated ourselves with earning to give and overly simplistic utilitarian (pseudo-utilitarian?) approaches. 

I made that mistake a lot. It's hard to overstate how much we, the first 'cohort' of EA, made that mistake. (Linch, I'm aware you've been in EA for a long time too but I don't mean to imply you're part of that first cohort or whatever.) It took only a few years for us to fully recognize we were making these mistakes and attempt to rectify so many misconceptions about EA. Yet a decade later we're still stuck with them.

If you strongly think I should do this, I will, but it will be a bit of a faff.

It doesn't seem necessary to do it. In this comment, I went over how major mistakes in how EA branded itself in its first few years were, in hindsight, very bad optics, because they resulted in major public misconceptions about what EA is about and about what it's really effective for those in EA to do, e.g., with their careers.

Summary: It's common knowledge that the movement which has grown in the aftermath of the George Floyd protests brands itself as seeking to defund rather than abolish the police. Making the same, very literal mistake one criticizes another movement for making signals that EA is too sloppy and careless to be really effective or taken seriously.

This of course rightly identifies the kind of problem but misrepresents its content. The word used is not "abolish" but "defund." 

This is common knowledge. I don't mean this personally, as I sympathize with one not considering it necessary to be so tedious, but there is technically no excuse for making this mistake.

It might seem like a trivial fact. Yet if it's trivial, it also takes no effort to acknowledge it. It's important for participants in effective altruism to indicate their earnest effort to be impartial by taking enough care to not make the same mistake(s) other movements are being criticized for making.

The claim is "defund" means something like: 

  1. Dramatically reduce the annual budgets of police.
  2. Reallocate that funding to public services and social programs that address the systemic causes of crime and reduce crime rates by other means.

This of course isn't a sufficient defence of the slogan "defund the police." It neglects the fact that almost everyone who isn't involved in social justice movements will interpret defund as a synonym for abolish.  

Yet rebranding with the term "police reform" would also pose a problem. It's an over-correction that fails to distinguish how one movement seeks to reform the police from the ways anyone else would reform the police. 

The open borders movement faces the same challenge. Rebranding "open borders" as "immigration reform" would be pointless. 

The best term I've seen to replace "defund the police" is "divest from the police" because it more accurately represents the goal of reallocating funding from policing to other public services. I only saw it embraced by the local movement where I live in Vancouver for a few months in 2020. That movement now mostly brands itself with "defund" instead of "divest." I haven't asked why but I presume it's because association with the better-known brand brings them more attention and recognition. 

I'm aware this comment is probably annoying. I almost didn't want to write it because I don't want to annoy others.

Yet misrepresenting another movement like this isn't even strawmanning. It indicates an erroneous understanding of that movement. The criticism doesn't apply to something that movement isn't doing.

The need I feel to do this annoys me too. It's annoying because it puts EA in a position of always having to steelman other movements. 

It raises the question of whether it's really necessary for EA to steelman other movements when they only ever strawman EA. The answer to that question is yes.

It's not about validating those other movements. It's about reinforcing the habit of being effective so EA can succeed when other movements fail. 

Other than the major focus areas in EA, there are efforts to effectively make progress in achieving the goals for causes prioritized by other movements. For example, Open Philanthropy focuses on criminal justice reform too. By trying to be the most effective for every cause EA pursues, EA can outperform other movements in ways that will move the public to care more about effectiveness and trust EA more. 
 

(Full disclosure: I support the general effort to dramatically decrease police funding and reallocate that money to public services and social programs that will better and more systemically serve the goals of public safety and crime reduction. I know multiple core activists and organizers in the local 'defund the police' movement.)
