I've decided to donate $240 each to GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).

I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:

They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.

It seems useful to notice when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of the ways their actions negatively impact others, and the ways that common incentives push them to do worse things.

A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget doing as much good as possible. If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost-effective in general.

I think this is a decently good point, but it doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal. This is similar to the reasons for being vegan or vegetarian - it's probably not the most effective choice from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.

After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy. 

Unfortunately both can be true:

1) Language models are really useful and can help people learn, write, and research more effectively
2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential risk

I think OpenAI, and to varying extents other scaling labs, are engaged in reckless behavior, scaling up and deploying these systems before we understand how they work well enough to be confident in our safety and alignment approaches. At the same time, I do not recommend that people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they decide not to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models. But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.

Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions like quality alignment research or AI policy work aimed at buying more time are much more important than offsets. I won't dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.

I didn't spend a lot of time deciding which orgs to donate to, but my reasoning is as follows: MIRI has a solid track record of highlighting existential risks from AI and encouraging AI labs to act less recklessly and raise the bar for their alignment work. GovAI (the Centre for the Governance of AI) is working on regulatory approaches that might give us more time to solve key alignment problems. According to staff I've talked to, MIRI is not heavily funding constrained, but they believe they could use more money. I suspect GovAI is in a similar place, but I have not inquired.

Comments (10)



I really appreciate the donation to GovAI!

According to staff I've talked to, MIRI is not heavily funding constrained, but they believe they could use more money. I suspect GovAI is in a similar place, but I have not inquired.

For reference, for anyone thinking of donating to GovAI: I would currently describe us as “funding constrained” — I do currently expect financial constraints to prevent us from making program improvements/expansions and hires we’d like to make over the next couple of years. (We actually haven’t yet locked down enough funding to maintain our current level of operation for the next couple of years, although I think that will probably come together soon.)

We’ll be putting out a somewhat off-season annual report soon, probably in the next couple weeks, that gives a bit of detail on our current resources and what we would use additional funding for. I’m also happy to share more detailed information upon request, if anyone might be interested in donating and wants to reach out to me at ben.garfinkel@governance.ai.

I remain unconvinced that these offsets are particularly helpful, and certainly not at 1:1.

My understanding is that alignment as a field is much more constrained by ideas, talent, and infrastructure than by funding. Providing capabilities labs like OpenAI with more resources (and making it easier for similar organisations to raise capital) seems to do much more to shorten timelines than providing some extra cash to the alignment community today does to get us closer to good alignment solutions.

I am not saying it can never be ethical to pay for something like ChatGPT Plus, but if you are not directly using that to help with working on alignment then I think it’s likely to be very harmful in expectation.

I am pretty surprised that more of the community don't have an issue with merely using ChatGPT and similar services - it provides a lot of real-world data that capabilities researchers will use for future training, and encourages investment into more capabilities research, even if you don't pay them directly.

"Very harmful" seems unreasonably strong. These products are insanely widely used and Jeffrey's impact will be negligible. I generally think that tracking minor harms like this causes a lot more stress than it's worth.

Thanks for the response!

The products being widely used doesn’t prevent the marginal impact of another user being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.

In addition, establishing norms about behaviour can influence a much larger number of users.

You could make similar arguments to suggest that if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan, but I think those choices are at least given more serious consideration both in EA communities and in other communities that care about these causes.

You could make similar arguments to suggest that if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan

If it helps, I also hold this opinion and think that many EAs are also wrong about this. I particularly think that on the margin fewer EAs should be vegan, by their own lights (my impression is that most EAs do still fly when flying makes sense).

The products being widely used doesn’t prevent the marginal impact of another user being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.

I agree with this argument in principle, but think that it just doesn't check out - if you compare it to the other options for reducing AI x-risk (like Jeffrey's day job!), I think his impact from that seems vastly higher than the impact of ChatGPT divided by its several million users (and the share given to OpenAI, Microsoft, etc.). And both of these scale with the expected harm of AI x-risk, so the ratio argument still holds regardless of the absolute scale. I also generally think that it's a mistake to significantly stress over tiny fractions of your expected impact, and that doing so leads to poor allocations of time and mental energy.

Even if you're not working on AI x-risk directly, I would guess that, e.g., donations to the Long-Term Future Fund still matter way more.

This isn't an actual argument, but I have a meta-level suspicion that in many people this kind of reasoning is generated more by the virtue ethics of "I want to be a good person => avoid causing harm" than by the utilitarian "I want to maximise the total amount of good done", and that in many people the utilitarian case is more post hoc justification.

I think the influencing-other-people argument does check out, but I'm just pretty skeptical that the average number of counterfactual converts will be more than, say, 5, and this doesn't change my argument. You also need to be careful to avoid double-counting evidence - if being vegan convinces someone else to become vegan, and THEY convince someone else, then you need to split the credit between the two of you. If you think it leads to significant exponential growth then MAYBE the argument goes through even after credit sharing and accounting for counterfactuals?

In the concrete case of ChatGPT, I expect the models to just continue getting better and the context to shift far too rapidly for slow movement growth like that to be very important (concretely, I think that a mass-movement boycott of these products is unlikely to be a decisive factor in whether AI products like this are profitable).

@Daniel_Eth asked me why I chose 1:1 offsets. The answer is that I did not have a principled reason for doing so, and I don't think there's anything special about 1:1 offsets except that they're a decent Schelling point. I think any offsets are better than no offsets here. I don't feel like BOTECs of harm caused are likely to be a particularly useful way to calculate offsets here, but I'd be interested in arguments to that effect if people have them.

Thanks for the post.

I've decided to donate $240 each to GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).

These amounts are small.

Let's say the value of your time is $500 / hour.

I'm not sure it was worth taking the time to think this through so carefully.

To be clear, I think concrete actions like quality alignment research or AI policy work aimed at buying more time are much more important than offsets.

Agree.

By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal

...

I won't dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.

Honestly, if someone told me they'd done this, my first thought would be "huh, they've taken their eye off the ball". My second would be "uh oh, they think it's a good idea to talk about ethical offsetting".

I think it's worth pricing in the possibility of reactions like this when reflecting on whether to take small actions like this for the purpose of signalling.

Let's say the value of your time is $500 / hour.

I'm not sure it was worth taking the time to think this through so carefully.

But:

  1. J is thinking this through and posting it to give insight to others, not just for his own case.

  2. If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question.

Thanks for writing this. I've been wondering whether it's ethical for me to have a ChatGPT Plus subscription, and it's useful to see other folks thinking along similar lines + providing 'solutions'.

As a side note, I've just written a shortform about how I believe more people should be integrating new AI tools into their workflows. For people worried about giving data and money to Microsoft, I think offsetting is likely a great way to ensure you capture the benefits, which I expect to be higher than the price of the offset.
