TLDR: As individuals and as a community, we should consider the pros and cons of boycotting paid ChatGPT subscriptions.
Straight out of the gate – I'm not arguing that we should boycott (or not), but suggesting that we should make a clear, reasoned decision about whether signing up to paid AI subscriptions is best for ourselves and the EA community.
Although machine learning algorithms are now a (usually invisible) part of everyday life, for the first time in history anyone anywhere can now pay to directly use powerful AI – for example through the new $20 ChatGPT Plus subscription. Here are 3 pros and 3 cons of boycotting paid ChatGPT, largely based on known pros/cons of other boycotts. There will likely be more important reasons on both sides than these oh-so-shallow thoughts – please share them, and comment on which of these you might weight more or less in your decision making.
For a boycott
- Avoid contributing directly to increasing P(doom). This is pretty straightforward: we would be paying perhaps the most advanced AI company in the world – a potential supplier of said doom – to improve its AI.
- Integrity – improve our ability to spread the word: if we can say we have boycotted a high-profile AI, then our advocacy for AI danger and alignment might be taken more seriously. With a boycott and this 'sacrificial signalling', we might find it easier to start discussions, and our arguments may carry more weight.
Friend: "Wow, have you signed up to the new ChatGPT?"
Me/You: "It does look amazing, but I've decided not to sign up."
Friend: "Why on earth is that?"
Me/You: "Well, since you asked..."
- Historical precedent of boycotting what you're up against: animal rights activists are usually vegan or vegetarian; some climate activists don't fly. As flag bearers for AI safety, perhaps we should take these movements seriously and explore why they chose to boycott.
Against a boycott
- Systemic change > personal change: what really matters is systemic change in AI alignment – whether we personally pay a bit of money to use a given AI makes a negligible difference, or none at all. Advocating for boycotts, or even broadcasting our own, could distract from more important systemic efforts – in this case AI alignment work and lobbying governments for AI safety.
- Using AI to understand it and fight back: boycotting these tools might hinder our understanding of current AI capabilities. This is most relevant to people working directly on AI safety, but it is somewhat relevant to all of us, since using the tools helps us stay up to date on what AI can actually do.
- Using AI to make more money to give to alignment orgs: the productivity gains from these tools could earn us money that we then give to AI alignment organisations. Giving at a 1:1 ratio could be considered "moral offsetting" (thanks Jeffrey), but our increased productivity could potentially allow us to give far more than just offsetting the subscription.
- As ChatGPT 666 slaughters humanity, perhaps it will spare its paid users? (J/K)
I think there's a bit of an "ugh field" around activism for some EAs, especially the rationalist types in EA. At least, that's my experience.
My first instinct, when I think of activism, is to think about people who:
- Have incorrect, often extreme beliefs or ideologies.
- Are aggressively partisan.
- Are more performative than effective with their actions.
This definitely does not describe all activists, but it does describe some, and may even describe the median activist. That said, this shouldn't be a reason to dismiss the idea out of hand – after all, how good is the median charity? Not that great compared to what EAs actually do.
Perhaps there's a mass-movement issue here, though – activism tends to work best with a large groundswell of numbers. If you have a hundred thousand AI safety activists, you're simply not going to have a hundred thousand people with a nuanced and deep understanding of the theory of change behind AI safety activism. You're going to have a few hundred of those, and ninety-nine thousand people who think AI is bad for Reason X, where that's the extent of their thinking and X varies wildly in quality.
Thus, the question is: would such a movement be useful? To be useful, it would need to be effective at changing policy, and it would need to be aimed at the correct places. Even if the former is true, I'm skeptical of the latter, since even AI policy experts are not yet sure where to aim their own efforts, let alone how to communicate those aims so well that a hundred thousand casually engaged people can point in the same useful direction.