

I have a PhD in finance and am the strategist at Affinity Impact, the impact initiative of a Taiwanese family that makes both grants and impact investments.


Got it. But I think the phrasing for the number of animals that die is confusing then. Since you say "100 other human [sic] would probably die with me in that minute," the reference is to how many animals would also die during that minute. I think what you want to say is how many animals would die for every human death, but that's not the current phrasing (and by that logic, the number of humans that would die per human death would be 1, not 100).

I'd suggest making everything consistent on a per-second basis, as smaller numbers are more relatable. So 1 other human would die with you that second, along with 10 cows, etc.

Thanks for writing this! The very last sentence seems off. Did you mean to say every second (instead of minute)? Also, the number of farm animals that die every second should be 1/60 (not 1/120) of that in the “minute” table above.

This last sentence was quite shocking for me to read. It’s sad…but very powerful.

Minor suggestion: in your title and summary, please just write out "10 k" as 10,000. No need to abbreviate when people may be unsure that it's actually 10,000 (given that it's such a large difference). 

I agree with Michael that concrete examples would be very helpful, even for researchers.  A post should be informative and persuasive, and examples almost always help with that. In this case, examples can also make clear the underlying logic, and where the explanation can be confusing. 

For example, let's think about investing in alternative protein companies as a way to tackle animal welfare. Assume that in a future state where lots more people eat real meat (a bad world state), the returns for alt proteins are low but cost-effectiveness is high. This could be because alt proteins have faced lower rates of adoption (low returns), but it's now easier to persuade meat eaters to switch (search costs are low since more willing switchers can be efficiently targeted). The opposite holds too: in a good future state with few meat eaters, alt-protein returns are high but cost-effectiveness is low. So this scenario should put us in your table's upper-left quadrant (negative correlation between world state and cost-effectiveness, plus negative correlation between return and cost-effectiveness).

This example illustrates how some of your quadrant descriptions may be  confusing or even inappropriate:

  1. "Underweight investment": I agree with this one since to have a greater EV, you want investments with a positive correlation between returns and cost-effectiveness. This isn't true for alt proteins here, so you should avoid them.
  2. "Divest from evil to do good": I don't think this makes sense because alt proteins are not "evil" (but you should avoid them given the scenario).
  3. "Mission leveraging": I was quite confused initially because I assumed the comparison was to no investment at all. If so, then investing in alt proteins has an ambiguous effect on volatility (depending on the relative magnitude of return changes versus cost-effectiveness changes). It could in fact be mission hedging (with an improvement in the bad state) if the low returns end up producing more total good because of that state's high cost-effectiveness. However, I eventually realized that the comparison is to a fixed grant within the animal welfare space (although this was never made explicit in the post and may not be what most people would assume). If so, then this is indeed always mission leveraging, since a positive correlation between the world state and returns does ensure lower volatility.
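To make the fixed-grant comparison concrete, here is a toy two-state calculation. All numbers are made up for illustration (0.9x and 1.3x investment returns; cost-effectiveness of 2 and 1 units of good per dollar) and are not from the post:

```python
# Hypothetical two-state sketch of the alt-protein scenario above.
# In the bad world state (many meat eaters), alt-protein returns are
# low but cost-effectiveness is high; in the good state, the reverse.
states = [
    # (label, investment return multiplier, units of good per dollar)
    ("bad world state", 0.9, 2.0),
    ("good world state", 1.3, 1.0),
]

fixed_grant = 1.0  # baseline: grant $1 now, regardless of state

invest_impacts = []  # total good if we invest $1 and grant the proceeds
grant_impacts = []   # total good from the fixed $1 grant

for label, ret, ce in states:
    invest_impacts.append(ret * ce)
    grant_impacts.append(fixed_grant * ce)
    print(f"{label}: invest -> {ret * ce:.2f}, fixed grant -> {fixed_grant * ce:.2f}")

# The investment's impact across states (1.80 vs 1.30) is a narrower
# spread than the fixed grant's (2.00 vs 1.00): returns positively
# correlated with the world state lower the volatility of total good,
# i.e. mission leveraging relative to the fixed-grant baseline.
```

With these illustrative numbers, the investment's spread of outcomes (0.5) is half the fixed grant's (1.0), which is exactly the lower-volatility claim in point 3.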

So as you can see, an example makes clear where table descriptions may be inappropriate and where a clearer description can be helpful. It also makes more concrete what various correlation signs mean and how to think about them.

This post (and the series it summarizes) draws on the scientific literature to assess different ways of considering and classifying animal sentience. It persuasively takes the conversation beyond an all-or-nothing view and is a significant advancement for thinking about wild animal suffering, as well as farm animal welfare beyond just cows, pigs, and chickens.

Thanks for the clarification, Owen! I had misunderstood 'investment-like' as simply having return-compounding characteristics. To truly preserve optionality, though, these grants would need to remain flexible (can change cause areas if necessary; so grants to a specific cause area like AI safety wouldn't necessarily count) and liquid (can be immediately called upon; so Founder's Pledge future pledges wouldn't necessarily count). So yes, your example of grants that result "in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values" certainly qualifies, but I suspect that's about it. Still, as long as such grants exist today, I now understand why you say that the optimal giving rate is implausibly (exactly) 0%.

Hi Owen, even if you're confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give can still be desirable. That's because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher-impact opportunities in the future.

The secretary problem comes to mind (not a perfect analogy, but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accept the next applicant that's better than all the ones we've seen. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count as ~37% of our total future. Otherwise, we should continue rejecting opportunities. This allows us to better understand the extent of impact that's actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
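For intuition on where that ~37% comes from, the classic 1/e stopping rule can be checked with a quick simulation. This is just the textbook secretary problem (distinct candidate qualities, random arrival order), not a model of the giving decision itself:

```python
import math
import random

def secretary_trial(n, rng):
    """One trial of the secretary problem with n candidates.

    Reject the first n/e candidates outright, then accept the first
    candidate better than everything seen so far. Returns True if we
    ended up picking the overall best candidate.
    """
    ranks = list(range(n))  # distinct qualities; n - 1 is the best
    rng.shuffle(ranks)      # random arrival order
    cutoff = int(n / math.e)  # reject the first ~37%
    best_seen = max(ranks[:cutoff]) if cutoff else -1
    for quality in ranks[cutoff:]:
        if quality > best_seen:
            # Accept the first candidate beating all earlier ones.
            return quality == n - 1
    return False  # the best was in the rejected window; we never accept

def success_rate(n=100, trials=20_000, seed=0):
    rng = random.Random(seed)
    wins = sum(secretary_trial(n, rng) for _ in range(trials))
    return wins / trials

# The simulated success rate lands near the theoretical 1/e ~ 0.368.
print(success_rate())
```

The ~37% threshold is optimal only under the problem's strict assumptions (known horizon, no recall of rejected candidates, all-or-nothing payoff), which is part of why the analogy to giving-later is loose.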

I highly recommend the Founder's Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.  

Have you compared your analysis to this previous EA Forum post? Are there different takeaways? Have you done anything differently and if so, why? 
