I’ve been hypocritical. I tell people they are deluded to think they can influence AI company agendas or values from within, when I’ve still been trying to influence EA from within. With this post, you won’t hear from me on this account again.  I’m also going to do my best to stay out of conversations I consider beside the point that we need external regulatory control over AI, like endless timeline tweaks and baseball card trading evals, on other platforms. Many of you will be muted or blocked elsewhere and I will sincerely do my best not to engage with you.

If you want to work with PauseAI, we will do so on PauseAI terms. If you want to still have a relationship with me, it’s person to person, and I will give your EA positions or connections zero deference. I will not show respect to your compromised friends. I will not go to parties or be in any social situation where I have to be nice to them and they feel included in safety work somehow despite betraying humanity. If you work for an AI lab, we will not be friends, but you can always contact me if you want to spill dirt or need help leaving.

You should all be ashamed of your complicity in bringing about potentially world-ending technology. It disgusts me and fills me with rage to hear excuse after excuse. Beyond disappointing. I’ve wanted to avoid simply seeing you all as a facet of the enemy because of my history with you, but that’s what EA is now. 

I know a LOT of you think Anthropic is your friend. A lot of you used to think that about OpenAI, too. You were and are fools. But I get it— that’s how I feel about you. I’ve been a fool trying to influence people who are on the AI industry’s money and glory payroll. I’m going to take my own advice now, write you off, and focus on the moral majority who wants to protect the world. 

Comments

I’ve been a fool trying to influence people who are on the AI industry’s money and glory payroll. I’m going to take my own advice now, write you off, and focus on the moral majority who wants to protect the world.

I've donated $30,000 to PauseAI. Some of your past posts played a role in that, such as The Case for AI Safety Advocacy to the Public and Pausing AI is the only safe approach to digital sentience. I don't think writing off people like me is a good idea.

You should all be ashamed of your complicity in bringing about potentially world-ending technology.

I am literally donating to PauseAI. I don't think you are being fair. I fully agree that some EAs are directly increasing x-risk by working on AI development, and they should stop doing that. I don't think it's fair to paint all of us with that brush.

Right - only 5% of EA Forum users surveyed want to accelerate AI:

"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab."

This post is not (mainly) calling out EA and EAs for wanting to accelerate AI.

It's calling out those of us who do think that the AGI labs are developing a technology that will literally kill us and destroy everything we love with double digit probability, but are still friendly with the labs and people who work at the labs.

And it's calling out those people who think the above, and take a salary from the AGI labs anyway.

I read this post as saying something like, 

"If you're serious about what you believe, and you had very basic levels of courage, you would never go to a party with someone who was working at Anthropic and not directly tell them that what they're doing is bad and they should stop."

Yes, that's awkward. Yes, that's confrontational. 

But if you go to a party with people building a machine that you think will kill everyone, and you just politely talk with them about other stuff, or politely ignore them, then you are a coward and an enabler and a hypocrite.

Putting your interest in being friendly with people in your social sphere above vocally opposing the creation of a doom-machine is immoral and a disgrace to the values you claim to hold.

I (Holly) am drawing the line here. Don't expect me to give polite respect to what I consider the ludicrous view that it's reasonable to, e.g., work for Anthropic.

I don't overall agree with this take, at this time. But I'm not very confident in my disagreement. I think Holly might basically be right here, and on further reflection I might come to agree with her.

I definitely agree that the major reason why there's not more vocal opposition to working at an AGI lab is social conformity and fear of social risk. (Plus most of us are not well equipped to evaluate whether it possibly makes sense to try to "make things better from the inside", and so we defer to others who are broadly pro some version of that plan.)


If it helps at all, people definitely read your updates, and it would be a shame if you stopped posting them here. I've recommended to students trying to "do EA things" that they should start a local PauseAI chapter. Partially that's because people from PauseAI post on here.
