sbehmer's Comments

Effective Altruism and Free Riding

Thanks for the comment. If differences in careful thinking are the main source of differences in people's altruistic behavior, and those differences can easily be eliminated by informing people about the benefits of thinking carefully, then I agree that the ideas in this post are not very important.

The second part is relevant because, as long as these differences in careful thinking persist, it's as if people have differences in values (this is the same point I made in the essay about how there are many differences in beliefs within the EA community which lead to different valuations of causes, even when people's moral values are identical). If these differences in careful thinking were easy to eliminate, then we should prioritize informing the entire world about its mistakes ASAP, so that any differences in altruistic priorities would be eliminated. Unfortunately, I don't think these differences are easy to eliminate (I think that's partly why the EA community has moved away from advocacy).

I also disagree that differences in careful thinking are the main source of differences in people's altruistic behavior. Even within the EA community, where I think most people think very carefully, there are large differences in people's valuations of causes, as I mentioned in the post. I expect the situation would be similar if the entire world started "thinking more carefully".

Reducing long-term risks from malevolent actors

Thanks, it's a very nice article on an important topic. If you're interested, there's a small literature in political economy called "political selection" (here's an older survey article). As far as I know, that literature doesn't focus specifically on the extreme lower tail of bad leaders, but it does discuss how different institutional features can lead to different types of people gaining power.

Effective Altruism and Free Riding

First, the only strong claim that I'm trying to make in the post is that the standard EA advice in this setting is to free-ride. Free-riding is not necessarily irrational or immoral. In the section "Working to not Destroy Cooperation" I argue that it's possible that this sort of free-riding will make the world worse, but that is more speculative.

As for who the other players are in the climate change example, I was thinking of basically everyone else in the world who has some interest in preventing climate change, though the most important players are those who have or could have a large impact on climate change and other important problems. This takes the form of a many-player public goods game, which is conceptually similar to a prisoner's dilemma. While I think it's unlikely that everyone who has contributed to fighting climate change will collectively decide "let's not help EA with their goals", I think it's possible that if EA has success with its current strategy, some people will adopt the methodology of EA. This could lead them to contribute to causes which are neglected according to their value systems but which most people currently in EA find less important than climate change (causes like philanthropy in their local communities, near-term conservation work, spreading their religion, or some bizarre thing that they think is important but no one else does). So, in that way, free-riding by EA could lead others to free-ride, which could make us all worse off.
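
For readers who want the game-theoretic intuition, here's a minimal sketch of an n-player public goods game (the endowment, multiplier, and player count are my own toy parameters, not from the post). It shows why contributing nothing is individually rational even though universal contribution leaves everyone better off:

```python
# A minimal sketch of an n-player linear public goods game (toy
# parameters of my own choosing, not from the post). Each player has an
# endowment of 1 and chooses how much to contribute; every unit
# contributed pays m to *every* player. When m < 1 < m * n, contributing
# nothing is the dominant strategy even though full contribution by
# everyone is better for all.

def payoff(i, contributions, m=0.5):
    """Player i's payoff: the endowment they kept for themselves,
    plus m times the total amount contributed by all players."""
    private = 1.0 - contributions[i]
    public = m * sum(contributions)
    return private + public

n = 4
everyone_free_rides = [0.0] * n
everyone_contributes = [1.0] * n

print(payoff(0, everyone_free_rides))   # 1.0
print(payoff(0, everyone_contributes))  # 2.0  (m * n = 2 > 1)

# But defecting unilaterally while the other three contribute pays
# 1 + 0.5 * 3 = 2.5 > 2.0, so free-riding dominates: a many-player
# prisoner's dilemma.
defecting_alone = [0.0] + [1.0] * (n - 1)
print(payoff(0, defecting_alone))       # 2.5
```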

Effective Altruism and Free Riding

> I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

The more general model captured the idea that there are almost always gains from cooperation between those looking to do good. It doesn't show, however, that those gains are necessarily large relative to the costs of building cooperation (including opportunity costs). I'm not sure what the answer is to that.

Here's one line of reasoning which makes me think the net gains from cooperation may be large. Setting aside the possibility that everyone has near-identical valuations of causes, I think we're left with two likely scenarios:

1. There's enough overlap in valuations of direct work to create significant gains from compromise on direct work (maybe on the order of doubling each person's impact). This is like example A in the post.

2. Valuations of direct work are so far apart (everyone thinks their own cause area is 100x more valuable than the others) that we're nearly in the situation from example D, and there will be relatively small gains from building cooperation on direct work. However, this creates opportunities for huge externalities from advocacy, which means the actual setting is closer to example B. Intuition: if you think x-risk mitigation is orders of magnitude more important than global poverty, then an intervention which persuades someone to switch from working on global poverty to x-risk will also have massive gains (and a massively negative impact from the perspective of the person who strongly prefers global poverty); a toy calculation follows below. I don't think this is a minor concern. A lot of resources seem to get wasted in politics by people with nearly orthogonal value systems fighting each other through persuasion and other means.

So, in either case, it seems like the gains from cooperation are large.
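
To put rough numbers on scenario 2 (the valuations below are purely hypothetical, chosen only to illustrate the 100x case):

```python
# Toy illustration with purely hypothetical numbers: two donors, each
# valuing their own cause 100x more than the other's.
values = {
    "A": {"x_risk": 100, "poverty": 1},   # donor A's valuations
    "B": {"x_risk": 1,   "poverty": 100}, # donor B's valuations
}

# Donor A persuades one of B's workers to switch from poverty to x-risk:
gain_to_A = values["A"]["x_risk"] - values["A"]["poverty"]   # +99 by A's lights
loss_to_B = values["B"]["poverty"] - values["B"]["x_risk"]   # -99 by B's lights

print(gain_to_A, loss_to_B)  # 99 99
# Persuasion looks hugely positive to A and equally negative to B, so
# both sides have strong incentives to pour resources into advocacy
# that largely cancels out.
```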

Effective Altruism and Free Riding

> I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation.

For now, I don't think any major changes in decisions should be made based on this; we don't know enough about how difficult it would be to build cooperation or how large the gains would be. I guess the only concrete recommendation may be to more strongly emphasize the "not being a jerk" part of effective altruism (especially because that can often be in major conflict with the "maximize impact" part). I'd also argue that there's a chance cooperation could be very important, so it's worth researching further.

Effective Altruism and Free Riding

One more example to add here of a cause which may be like a "public good" within the EA community: promoting international cooperation. Many important causes are global public goods (causes which benefit the whole world, so any one nation has an incentive to free-ride on other nations' contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety did the same.

Effective Altruism and Free Riding

I agree with your intuition about what a "cooperative" cause prioritization might look like, although a lot more work would need to be done to formalize it. I also think it may not make sense to use cooperative cause prioritization at all: if everyone else always acts non-cooperatively, you should too.

I'm actually pretty skeptical of the idea that EA tends to fund causes which are widely valued by people as a whole. It could be true, but it seems like it would be a very convenient coincidence. EA seems to be made up of people with pretty unusual value systems (this, I'd expect, is partly what leads EAs to view some causes as orders of magnitude more important than the causes other people choose to fund). It would be surprising if optimizing independently for the average EA value system led to the same funding choices as optimizing for some combination of the value systems in the general population. While I agree that global poverty work seems to be pretty broadly valued (many governments and international organizations are devoted to it), I'm unsure about things like x-risk reduction. Have you seen any evidence that it is broadly popular? Does the UN have an initiative on x-risk?

I would imagine that work which improves institutions is one cause area which would look significantly more important in the cooperative framework. As I mention in the post, governments are one of the main ways that groups of people solve collective action problems, so improving their functioning would probably benefit most value systems. This would involve improving both formal institutions (e.g. constitutions) and informal institutions (e.g. civic social norms). In the cooperative equilibrium, we could all be made better off because people of all different value systems would put a significant amount of resources towards building and maintaining strong institutions.

A (tentative) response to your second-to-last paragraph: the preferences of animals and future generations would probably not be directly considered when constructing the cooperative world portfolio. Gains from cooperation come from people who control resources working together so that they're better off than in the case where they independently spend their resources. Animals do not control any resources, so there are no gains from cooperating with them. Just like in the non-cooperative case, the preferences of animals will only be reflected indirectly through people who care about animals (just to be clear: I do think that we should care about animals and future people). I expect this is mostly true of future generations as well, though maybe there is some room for inter-temporal cooperation.

Effective Altruism and Free Riding

Thanks a lot for the comment. Here are a few points:

1. You're right that the simple climate change example won't always be a prisoner's dilemma. However, I think that's mostly because I assumed constant returns to scale for all three causes. At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a Pareto improvement is possible through cooperation (unless I'm making a mistake in the proof, which is possible); a numerical sketch follows this list. So I think the existence of collective action problems is more general than the climate change example would make it seem.

2. It's a very nice point that the gains from cooperation may be small in magnitude, even if they're positive. That is definitely possible. But I'm a little skeptical that large valuation differences between the four "schools" of EA donors mean that the gains from cooperation are likely to be small. I think even within those schools there are significant disagreements about causes. For example, within the long-termist school, disagreements on whether we're living at an extremely influential time, or on how to value population increases, can lead to very large differences in valuations of causes. Also, when people have very large differences in valuations of direct causes, the opportunity for conflict on the advocacy front seems to increase (see Phil Trammell's post here).
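
Here's the numerical sketch promised above (the valuations, budgets, and solver setup are my own toy choices, not the write-up's proof): two funders with positive but non-identical valuations over three causes with log returns. Iterated best responses approximate the non-cooperative equilibrium, and a jointly chosen allocation Pareto-dominates it:

```python
# A rough numerical sketch (my own toy valuations and solver setup, not
# the write-up's proof). Two funders, three causes with log returns,
# positive but non-identical valuations.
import numpy as np
from scipy.optimize import minimize

v = np.array([[2.0, 1.0, 0.5],    # funder 1's valuations of the causes
              [0.5, 1.0, 2.0]])   # funder 2's valuations (mirror image)
budget = 1.0

def utility(i, own, other):
    """Funder i's utility from the total funding of each cause (log returns)."""
    return float(np.sum(v[i] * np.log(own + other + 1e-9)))

def best_response(i, other):
    """Funder i's optimal allocation of their budget, given the other's."""
    res = minimize(lambda x: -utility(i, x, other),
                   x0=np.full(3, budget / 3),
                   bounds=[(0.0, budget)] * 3,
                   constraints={"type": "eq",
                                "fun": lambda x: x.sum() - budget})
    return res.x

# Approximate the Nash equilibrium by iterated best responses.
alloc = [np.full(3, budget / 3), np.full(3, budget / 3)]
for _ in range(50):
    alloc[0] = best_response(0, alloc[1])
    alloc[1] = best_response(1, alloc[0])
nash = [utility(0, alloc[0], alloc[1]), utility(1, alloc[1], alloc[0])]

# Cooperative benchmark: jointly choose both allocations to maximize
# the sum of utilities (one point on the Pareto frontier).
def neg_welfare(z):
    a, b = z[:3], z[3:]
    return -(utility(0, a, b) + utility(1, b, a))

res = minimize(neg_welfare, x0=np.full(6, budget / 3),
               bounds=[(0.0, budget)] * 6,
               constraints=[{"type": "eq", "fun": lambda z: z[:3].sum() - budget},
                            {"type": "eq", "fun": lambda z: z[3:].sum() - budget}])
coop = [utility(0, res.x[:3], res.x[3:]), utility(1, res.x[3:], res.x[:3])]

print("non-cooperative utilities:", np.round(nash, 3))  # approx. [-1.474, -1.474]
print("cooperative utilities:    ", np.round(coop, 3))  # approx. [-1.401, -1.401]
# With these mirror-symmetric valuations, both funders do strictly better
# under the cooperative allocation: the equilibrium is Pareto-dominated.
```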


I agree that it would be useful to get more of an idea of when the prisoner's dilemma is likely to be severe. Right now I don't think I have much more to add on that.

Effective Altruism and Free Riding

Thanks for the clarification. I apologize for making it sound as if 80k specifically endorsed not cooperating.

Effective Altruism and Free Riding

Thanks for the comment. First, I'd like to point out that I think there's a good chance that the collective action problem within EA isn't so bad because, as I mentioned in the post, there has been a fairly large emphasis on cooperating with others within EA. It's when interacting with people outside of EA that I think we're acting non-cooperatively.


However, it's still worth discussing whether there are major unsolved collective action problems within EA. I'll give some possible examples here, though I'm very unsure about many of them. First, here are some causes which I think benefit EAs of many different value systems and would thus be underfunded if people were acting non-cooperatively:

1. General infrastructure, including the EA Forum, EA Funds, and EA Global. This also includes the mechanisms for cooperation which I mentioned in the post. All of these are like public goods in that they probably benefit nearly every value system within EA. If true, this also means that the "EA meta fund" may be the most public-good-like of the four EA funds.

2. The development of informal norms within the community (like being nice, not overstating claims or making misleading arguments, and cooperating with others). The development and maintenance of these norms also seems to be a public good which benefits all value systems.

3. (This is the most speculative one.) More long-term-oriented approaches to near-term EA cause areas. An example is approaches to global development which involve building better and lasting political institutions (see this forum post). This may represent a kind of compromise between some long-termist EAs (who might normally donate to AI safety) and global development EAs (who would normally donate to short-term development initiatives like AMF).


And here are some causes which I think are viewed as harmful by some value systems and thus would be overfunded if people acted non-cooperatively:

1. Advocacy efforts to convince people to convert from other EA cause areas to your own. As I mentioned in the post, these can be valued negatively by other value systems.

2. Causes which increase (or decrease) the population. People disagree on whether creating more lives is on average good or bad (for example, some suffering-focused EAs may think that creating more human lives is bad; conversely, some people may think that creating more farm animal lives is on average good). This means that causes which increase (or decrease) the population will be viewed as harmful by those who view population increases (or decreases) as bad. Brian Tomasik's example at the end of this post is along those lines.


So, in general, I don't agree that the EA community is free of major collective action problems. It seems more likely that EA has solved most of its internal collective action problems by emphasizing cooperation.
