Thanks for the post. One question on the background: is there any data (from the EA survey or elsewhere) about the percentage of EAs who lean towards suffering-focused ethics?
Thanks for the talk and the report. I think it's a very interesting topic and an important one to work on, given how many socially-minded people seem to care about impact investing.
I have a few more questions in addition to the one about perfectly elastic demand curves:
1. You note that if public markets are efficient, then it would take nearly the entire population of investors divesting for the divestment movement to impact stock prices. This seems to make sense: it only takes a small group of socially-neutral investors to drastically increase their investments in the bad company in response to divestment by others. However, if we consider a movement to increase investment in a socially-good company, this logic doesn't seem to apply. Let's say that the good company makes up 0.001% of the total stock market. If investors holding 0.001% of market wealth are willing to accept lower returns for investing in that company, then they should be able to fund the company all on their own. In equilibrium no socially-neutral investors would hold that company's stock, and the stock would yield lower returns than socially-neutral stocks. So perhaps movements which promote investment in good companies are more likely to succeed than divestment movements are (I sketch a toy version of this asymmetry below, after question 2).
2. From your research it looks like current ESG ratings are very low-quality. Given how big a market impact investing is, do you think there would be value in trying to improve those ratings?
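Here's the toy sketch promised in question 1. All the numbers are invented, and it does nothing more than compare pools of wealth, assuming neutral investors' demand is perfectly elastic:

```python
# Back-of-the-envelope comparison behind question 1 (all numbers made up).
market_cap = 100_000.0                      # total stock market, $bn
values_share = 0.00001                      # 0.001% of investor wealth is values-driven
values_wealth = values_share * market_cap

# Divestment: a 'bad' company worth 1% of the market. Neutral wealth dwarfs
# the divested positions, so (with perfectly elastic demand) neutral
# investors absorb the sales with no price impact.
bad_co_cap = 0.01 * market_cap
neutral_wealth = (1 - values_share) * market_cap
print(neutral_wealth >= bad_co_cap)         # True: divestment is absorbed

# Positive screening: a 'good' company worth 0.001% of the market. The
# values-driven investors alone can hold its entire float, so in equilibrium
# no neutral investor needs to own it, and its price can be bid up (returns
# bid down) without handing anyone an arbitrage.
good_co_cap = 0.00001 * market_cap
print(values_wealth >= good_co_cap)         # True: the tilt can move the price
```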
Thanks for the reply.
You're right that the paper I posted doesn't present direct evidence. I just thought it was worth noting that, in their literature review, they claim prior studies show that demand curves are not perfectly elastic (at least in theory; they aren't citing empirical papers).
On the empirical side, I'm surprised to hear you say that there seems to be agreement that long-run demand curves are perfectly elastic. On page 18 of the Founders Pledge report, you seem to say that there is expert disagreement on this, and you cite multiple recent studies on both sides of the issue. Has more evidence come out since the report was published?
It seems like you are fairly confident from your research that impact investing will tend to have little impact in publicly traded markets. I briefly looked into the theoretical literature on this, and I'm not seeing why we should be so confident in that idea. For example, this paper from 2019 claims:
"In general, systematic screening of assets based on investors’ preferences leads to a return premium on the screened assets, in equilibrium, and such return differences cannot be arbitraged away by 'neutral' investors".
They then cite four theoretical papers in support of that claim (note: I haven't actually read through these papers. I just glanced at the introductions and the setups of their models. It could be that these are bad papers).
Were you aware of this literature when writing your report? Why should we be so confident in the arbitrage argument?
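For what it's worth, here is a minimal mean-variance sketch of the mechanism I take those papers to be pointing at. This is my own toy construction in the spirit of exclusion models like Heinkel, Kraus & Zechner, not something taken from the cited papers, and all parameter values are invented:

```python
# Toy mean-variance model of a screened asset (all parameters invented).
# Each unscreened investor allocates a fraction w = (mu - r) / (gamma * sigma^2)
# of wealth to the asset. If a fraction f of total investor wealth screens the
# asset out, the remaining (1 - f) must hold its whole supply s (expressed as
# a share of total wealth), so market clearing requires
#     mu - r = gamma * sigma^2 * s / (1 - f).

def screened_premium(gamma, sigma, s, f):
    """Equilibrium excess return on an asset screened out by a fraction f of wealth."""
    return gamma * sigma**2 * s / (1.0 - f)

print(f"{screened_premium(gamma=3.0, sigma=0.3, s=0.02, f=0.00):.2%}")  # 0.54%
print(f"{screened_premium(gamma=3.0, sigma=0.3, s=0.02, f=0.25):.2%}")  # 0.72%
```

The premium rises with f because the asset's risk gets concentrated in fewer hands; bearing that concentrated risk is costly, which is why "neutral" investors can't simply arbitrage the return difference away.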
Thanks for the comment. If differences in careful thinking are the main sources of differences in people's altruistic behavior and those differences can be easily eliminated through informing people about the benefits of thinking carefully, then I agree that the ideas in this post are not very important.
The reason the second part is relevant is that, as long as these differences in careful thinking persist, it's as if people have differences in values (this matches what I said in the essay about how there are many differences in beliefs within the EA community which lead to different valuations of causes, even when people's moral values are identical). If these differences in careful thinking were easy to eliminate, then we should be prioritizing informing the entire world about their mistakes ASAP, so that any differences in altruistic priorities would be eliminated. Unfortunately, I don't think these differences are easy to eliminate (I think that's partially why the EA community has moved away from advocacy).
I would also disagree that differences in careful thinking are the main source of differences in people's altruistic behavior. Even within the EA community, where I think most people think very carefully, there are large differences in people's valuations of causes, as I mentioned in the post. I expect the situation would be similar if the entire world started "thinking more carefully".
Thanks, it's a very nice article on an important topic. If you're interested, there's a small literature in political economy called "political selection" (here's an older survey article). As far as I know they don't focus specifically on the extreme lower tail of bad leaders, but they do discuss how different institutional features can lead to different types of people gaining power.
First, the only strong claim that I'm trying to make in the post is that the standard EA advice in this setting is to free-ride. Free-riding is not necessarily irrational or immoral. In the section "Working to not Destroy Cooperation" I argue that it's possible that this sort of free-riding will make the world worse, but that is more speculative.
As for who the other players are in the climate change example, I was thinking of it as basically everyone else in the world who has some interest in preventing climate change, though the most important players are those who have, or could potentially have, a large impact on climate change and other important problems. This takes the form of a many-player public goods game, which is conceptually similar to a prisoner's dilemma. While I do think it's unlikely that everyone who has contributed to fighting climate change will collectively decide "let's not help EA with their goals", I think it's possible that, if EA has success with its current strategy, some people will adopt EA's methodology. This could lead them to contribute to causes which are neglected according to their own value systems but which most people currently in EA find less important than climate change (causes like philanthropy in their local communities, near-term conservation work, spreading their religion, or some bizarre thing they think is important but no one else does). So, in that way, free-riding by EA could lead others to free-ride, which could make us all worse off.
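To make that structure concrete, here is a minimal many-player public goods game (a standard textbook setup with invented numbers, not anything from the original post):

```python
# Minimal n-player public goods game (illustrative numbers only).
# Each player has an endowment of 1; contributions are multiplied and
# shared equally among all n players.

def payoff(my_contribution, total_contributions, multiplier=1.6, n=10):
    return (1 - my_contribution) + multiplier * total_contributions / n

n = 10
print(payoff(1, n))          # 1.6  -- everyone contributes fully
print(payoff(0, n - 1))      # 2.44 -- I free-ride on nine contributors
print(payoff(0, 0))          # 1.0  -- everyone free-rides: worse than cooperation
```

Since each unit contributed returns only multiplier/n = 0.16 to the contributor, contributing nothing is every player's dominant strategy, even though universal contribution leaves everyone better off; that's the prisoner's-dilemma-like tension.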
I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?
The more general model captured the idea that there are almost always gains from cooperation between those looking to do good. It doesn't show, however, that those gains are necessarily large relative to the costs of building cooperation (including opportunity costs). I'm not sure what the answer is to that.
Here's one line of reasoning which makes me think the net gains from cooperation may be large. Setting aside the possibility that everyone has near-identical valuations of causes, I think we're left with two likely scenarios:
1. There's enough overlap in valuations of direct work to create significant gains from compromise on direct work (maybe on the order of doubling each person's impact). This is like example A in the post.
2. Valuations of direct work are so far apart (everyone thinks that their cause area is 100x more valuable than others') that we're nearly in the situation from example D, and there will be relatively small gains from building cooperation on direct work. However, this creates opportunities for huge externalities from advocacy, which means the actual setting is closer to example B. Intuition: if you think x-risk mitigation is orders of magnitude more important than global poverty, then an intervention which persuades someone to switch from working on global poverty to x-risk will have massive gains by your values (and massively negative impact from the perspective of someone who strongly prefers global poverty). I don't think this is a minor concern. It seems like a lot of resources get wasted in politics due to people with nearly orthogonal value systems fighting each other through persuasion and other means (the toy sketch after this list illustrates the dynamic).
So, in either case, it seems like the gains from cooperation are large.
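Here's a toy version of the advocacy arms race in scenario 2. Everything in it is invented for illustration: two camps each value their own cause at 100x the other's, and I'm assuming a dollar spent on persuasion can redirect up to two dollars of the other camp's giving:

```python
# Toy advocacy arms race (all numbers invented). Each camp has a $10 budget,
# values its own cause at 100 per dollar and the other cause at 1 per dollar.
# A dollar spent on advocacy redirects up to `lev` dollars of the other
# camp's direct giving toward your cause.

def impacts(adv_a, adv_b, budget=10, value_own=100, value_other=1, lev=2):
    direct_a, direct_b = budget - adv_a, budget - adv_b
    to_a = min(lev * adv_a, direct_b)   # B's dollars redirected to A's cause
    to_b = min(lev * adv_b, direct_a)   # A's dollars redirected to B's cause
    cause_a = direct_a - to_b + to_a
    cause_b = direct_b - to_a + to_b
    return (value_own * cause_a + value_other * cause_b,   # A's impact
            value_own * cause_b + value_other * cause_a)   # B's impact

print(impacts(0, 0))   # (1010, 1010) -- truce: all money goes to direct work
print(impacts(5, 0))   # (1500, 15)   -- unilateral advocacy pays off
print(impacts(5, 5))   # (505, 505)   -- mutual advocacy: half the truce payoff
```

Unilateral advocacy is each camp's best response, but mutual advocacy roughly halves both camps' impact relative to a truce, so a cooperative non-aggression norm would be worth a lot here.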
I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation.
For now, I don't think any major changes in decisions should be made based on this. We don't know enough about how difficult it would be to build cooperation or how large the gains would be. I guess the only concrete recommendation may be to more strongly emphasize the "not being a jerk" part of effective altruism (especially because that can often be in major conflict with the "maximize impact" part). I would also argue that there's a chance cooperation could be very important, so it's worth researching further.
One more example to add here of a cause which may be like a "public good" within the EA community: promoting international cooperation. Many important causes are global public goods (that is, causes that benefit the whole world, so any one nation has an incentive to free-ride on other nations' contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety do the same.
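A toy version of that last matching idea (the valuations are invented): suppose each donor values a dollar to their own cause at 1.0 and a dollar to international cooperation at 0.6, since it helps their cause but less per dollar than direct work.

```python
# Toy matching logic (all valuations invented).
value_own, value_coop = 1.0, 0.6

# Shifting $1 to international cooperation unilaterally loses value:
print(round(-value_own + value_coop, 2))      # -0.4: not worth it alone

# But if the other donor matches the shift, each donor's dollar effectively
# buys two dollars of a cause they both benefit from:
print(round(-value_own + 2 * value_coop, 2))  # 0.2: worth it with matching
```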