Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
"It is appropriate for small donors to spend time finding small charities to support"
I think GFI has claimed this in the past, and given their role as a large coordinator in the area, I'm inclined to believe in their counterfactual importance. However, the problem is that without a downstream model of how dollars convert into averted animal suffering, it is quite hard to prioritise between theories of change.
Hi Caroline, thanks for the reply. I think you are right that both approaches are complementary and that we should support both. There is even a chance that advocacy campaigns end up creating momentum from which alternative proteins could benefit. It is also true that alternative proteins may be able to access funding that is not available to corporate advocacy campaigns and similar efforts, not just venture capital but also government support. That may also be why GFI highlights the environmental angle, which is an easier sell outside EA or animal welfare circles. Still, GFI itself is funded only through charitable donations (as far as I know), so we could argue that donations to it act as a catalyst.

In any case, I posed this question because I think we lack a formal model for deciding which types of interventions make more sense. It is as if, in Global Health, people had no models that allowed them to compare building water infrastructure against water chlorination or wells. I think parametric models could shed some light on the optimal allocation of capital between interventions, or at least make decision-making clearer.
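To make that concrete, here is a minimal sketch of the kind of parametric model I have in mind. Everything in it is hypothetical: the intervention labels, the scale parameters, and the diminishing-returns exponents are placeholders for illustration, not real cost-effectiveness estimates.

```python
# Toy parametric model: allocate a fixed budget between two intervention
# types, each with diminishing returns. All numbers are made up and only
# illustrate the shape of the comparison, not actual estimates.

def impact(dollars, scale, exponent):
    """Averted suffering (arbitrary units) as a function of spending."""
    return scale * dollars ** exponent

def best_split(budget, steps=1000):
    """Grid-search the share of the budget given to alternative proteins."""
    best = (0.0, float("-inf"))
    for i in range(steps + 1):
        share = i / steps
        total = (
            impact(share * budget, scale=2.0, exponent=0.6)          # e.g. alt proteins
            + impact((1 - share) * budget, scale=5.0, exponent=0.4)  # e.g. corporate campaigns
        )
        if total > best[1]:
            best = (share, total)
    return best

share, total = best_split(budget=1_000_000)
print(f"Optimal share to alt proteins: {share:.0%}, total impact: {total:,.0f}")
```

Even a toy like this makes the cruxes explicit: the disagreement between theories of change lives almost entirely in the scale and exponent parameters, which is exactly where I'd want better empirical estimates.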
I think there are good arguments for why those actions might indeed have been horrible mistakes. But I'm also quite uncertain about what the best course of action at the time would have been. For example, there's a reasonable case that the best we can hope for is steering the development of AI. I unfortunately don't know.
Let me give a non-AI example: I find it reasonable that some EAs try to steer how factory farming works (most animal advocacy), even though I would prefer that no animal be killed or tortured for food.
But, on the other hand, I believe people in leadership positions failed to detect and flag the FTX scandal ahead of time. And that's a shame.
What would be the motivation? Is writing a good skill to have, and does it therefore merit practising?