TL;DR: As individuals and as a community, we should consider the pros and cons of boycotting paid ChatGPT subscriptions.
Straight out of the gate: I'm not arguing that we should boycott (or that we shouldn't), but suggesting that we each make a clear, reasoned decision about whether it is best for ourselves and the EA community to sign up for paid AI subscriptions.
Although machine learning algorithms are now a (usually invisible) part of everyday life, for the first time in history anyone, anywhere can pay to use powerful AI directly, for example through the new $20 ChatGPT Plus subscription. Here are three pros and three cons of boycotting paid ChatGPT, largely based on the known pros and cons of other boycotts. There are likely more important reasons on both sides than these oh-so-shallow thoughts, so please share them, and comment on which of these you might weight more or less in your own decision making.
For a boycott
- Avoid contributing directly to increasing P(doom). This is straightforward: we would be paying perhaps the most advanced AI company in the world, and a potential supplier of said doom, to improve its AI.
- Integrity, and a better ability to spread the word: if we can say we have boycotted a high-profile AI, our advocacy on AI danger and alignment might be taken more seriously. With a boycott and this 'sacrificial signalling', we might find it easier to start discussions, and our arguments may carry more weight:
Friend: "Wow have you signed up to the new chat GPT?"
Me/You: "It does look amazing, but I've decided not to sign up"
Friend "Why on earth is that?"
Me/You: "Well since you asked..."
- Historical precedent of boycotting what you're up against: animal rights activists are usually vegan or vegetarian, and some climate activists don't fly. As flag bearers for AI safety, perhaps we should take these historical movements seriously and explore why they chose to boycott.
Against a boycott
- Systemic change > personal change: what really matters is systemic change in AI alignment; whether we personally pay a little money to use a given AI makes a negligible difference, or none at all. Advocating for boycotts, or even broadcasting our own, could distract from more important systemic work, in this case AI alignment research and government lobbying for AI safety.
- Using AI to understand it and fight back: boycotting these tools might hinder our understanding of the nature of AI. This matters most for those working directly on AI safety, but it is somewhat relevant to all of us, since we keep ourselves updated by understanding current capabilities.
- Using AI to make more money to give to alignment orgs: if these tools make us more productive, we can give the extra money to AI alignment organisations. Giving at a 1:1 ratio (donating $20 a month to match the $20 subscription) could be considered "moral offsetting" (thanks Jeffrey), but our increased productivity could allow us to give far more than just offsetting the subscription.
- As ChatGPT 666 slaughters humanity, perhaps it will spare its paid users? (J/K)
With respect to Point 2, I think EA is not large enough for a large AI activist movement to be composed mostly of EA-aligned people. EA is difficult and demanding; I don't think you're likely to get a "One Million EA" march anytime soon. I agree that AI activists who are EA-aligned are more likely to be among the focused, successful activists (like many of your friends!), but I think you'll end up with either:
- A small group of focused, dedicated activists, who may or may not be largely EA-aligned
- A large group of unfocused-by-default, relatively casual activists, most of whom will not be EA-aligned
If either of those two groups would be effective at achieving its goals, then I think AI risk activism is a good idea. If you need a large group of focused, dedicated activists, I don't think we're going to get one.
As for Point 1, it's certainly possible, especially if having a large group of relatively unfocused people would be useful. I have no idea whether that's true, so I have no idea whether raising awareness is an impactful idea at this point. (Also, some have made the point that raising AI risk awareness tends to make people more likely to race for AGI, not less; see OpenAI.)