[ Question ]

What are the bad EA memes? How could we reframe them?

by Nathan Young · 1 min read · 16th Nov 2021 · 13 comments


Effective altruism messaging

What are the ideas that confuse people, lead them to the wrong conclusion, or just go down really badly? How could they be more clearly expressed?

Some bad memes from other spaces:
- abolish the police - people claim it doesn't mean what the words literally mean
- ban cars - just deeply unpopular
- open borders - sadly voters hate this framing

Some suggestions for improving them:
- reform the police/ban qualified immunity
- reduce traffic? more walkable neighbourhoods (I don't know, I'm just giving suggestions)

So what are the EA equivalents of these? In other words, what are the things we should stop saying?


3 Answers

Earn to give

This idea is good in practice, but it's very easy to take out of context: "EAs want you to work for an oil company and donate the money to stop oil spills".

I know that's not what anyone means, but the phrase is confusing.

AI risk

I am concerned about AI risk so I don't like including this, but I do think it "polls badly" among my friends who take GiveWell etc pretty seriously. I wonder if it could be reframed to sound less objectionable.

You know, my take on this is that instead of resisting comparisons to Terminator and The Matrix, they should just be embraced (mostly). "Yeah, like that! We're trying to prevent those things from happening. More or less."

The thing is, when you're talking about something that sounds kind of far out, you can take one of two tactics: you can try to engineer the concept and your language around it so that it sounds more normal/ordinary, or you can just embrace the fact that it is kind of crazy, and use language that makes it clear you understand that perception.

So like, "AI Apocalypse Prevention"?

When I introduce AI risk to someone, I generally start by talking about how we don't actually know what's going on inside our ML systems, how we're bad at making their goals match what we actually want, and how we have no way of trusting that the systems actually have the goals we're telling them to optimize for.

Next I say this is a problem because as the state of the art in AI progresses, we're going to be giving these systems more and more power to make decisions for us, and if they are optimizing for goals different from ours this could have terrible effec... (read more)

I think something about properly testing powerful new technologies and making sure they're not used to hurt people sounds pretty intuitive. I think people intuitively get that anything with military applications can cause serious accidents or be misused by bad actors.

Buck: Unfortunately this isn't a very good description of the concern about AI, and so even if it "polls better" I'd be reluctant to use it.
Nathan Young: "AI is the new nuclear weapons. We don't want an arms race which leads to unsafe technologies," perhaps?

Avoid catastrophic industrial/research accidents?

4 comments

I think this question is conflating "inaccurate presentation of our beliefs" with "bad optics from accurate representations of our beliefs." It might be helpful to separate the two.  

At first, with the question of "bad EA memes", I also wondered if it might include "things lots of EAs believe that make them less effective at doing good".

If you strongly think I should do this, I will, but it will be a bit of a faff.