TLDR: As individuals and a community, we should consider the pros and cons of boycotting paid ChatGPT subscriptions.
Straight out of the gate – I’m not arguing that we should boycott (or that we shouldn’t), but suggesting that we should each make a clear, reasoned decision about whether it is best for ourselves and the EA community to sign up for paid AI subscriptions.
Although machine learning algorithms are now a (usually invisible) part of everyday life, for the first time in history anyone, anywhere can pay to directly use powerful AI – for example through the new $20 ChatGPT Plus subscription. Here are 3 pros and 3 cons of boycotting paid ChatGPT, largely based on the known pros and cons of other boycotts. There are likely more important reasons on both sides than these oh-so-shallow thoughts – please share them, and comment on which of these you might weight more or less heavily in your decision making.
For a boycott
- Avoid contributing directly to increasing P(doom). This is pretty straightforward: we would be paying perhaps the most advanced AI company in the world, and a potential supplier of said doom, to improve its AI.
- Integrity – improve our ability to spread the word: if we can say we have boycotted a high-profile AI product, our advocacy for AI danger and alignment might be taken more seriously. With a boycott and this ‘sacrificial signalling’, we might find it easier to start discussions, and our arguments may carry more weight:
Friend: "Wow have you signed up to the new chat GPT?"
Me/You: "It does look amazing, but I've decided not to sign up"
Friend "Why on earth is that?"
Me/You: "Well since you asked..."
- Historical precedent of boycotting what you’re up against: animal rights activists are usually vegan or vegetarian, and some climate activists don’t fly. As flag bearers for AI safety, perhaps we should take these historical movements seriously and explore why they chose to boycott.
Against a boycott
- Systemic change > personal change: what really matters is systemic change in AI alignment – whether we personally pay a little money to use a given AI makes a negligible difference, or none at all. Advocating for boycotts, or even broadcasting our own, could distract from more important systemic work – in this case AI alignment research and government lobbying for AI safety.
- Using AI to understand it and fight back: boycotting these tools might hinder our understanding of the nature of AI. This is most relevant to people working directly on AI safety, but it is somewhat relevant to all of us, since staying hands-on with current capabilities helps us keep our views updated.
- Using AI to make more money to give to alignment orgs: these tools may increase our productivity and income, and we can give that extra money to AI alignment organisations. Giving at a 1:1 ratio (e.g. donating $20 a month to offset the $20 subscription) could be considered “moral offsetting” (thanks Jeffrey), but our increased productivity could potentially allow us to give far more than just offsetting the subscription.
- As ChatGPT 666 slaughters humanity, perhaps it will spare its paid users? (J/K)
Seems to me that we’ll only see a change of course from relentless profit-seeking LLM development if intermediate AIs start misbehaving – smart enough to seek power and fight against control, but dumb enough to be caught and switched off.
I think that instead of a boycott, this is a time to practice empathic communication with the public, now that the tech is on everybody’s radar and AI x-risk arguments are getting a respectability boost from folks like Ezra Klein.
A poster on LessWrong recently harvested a comment from a New York Times reader that talked about x-risk in a way that clearly resonated with the readership. Figuring out how to scale that up seems like a good task for an LLM. In this theory of change, we need to double down on our communication skills to steer the conversation in appropriate ways, and we’ll need LLMs to help us do that. A boycott takes us out of the conversation, so I don’t think that’s the right play.
Gotcha, thanks – that makes sense.