I believe they are largely tractable: there's a variety of intervention types (policy, direct work, meta, research), cause areas (alt proteins, farmed animals, wild animal suffering, insects), organisations, and geographies to pursue them in. Of particular note may be potentially highly tractable and impactful work in LMICs (Africa, Asia, the Middle East, Eastern Europe).
I will say animal welfare is a newer and less explored area than global health, but that may mean your donation can be more impactful and make more of a difference, as there could be a snowball effect from funding new high-potential interventions or research.
If you are quite concerned about tractability, perhaps you could consider donating to organisations doing research or meta-work to discover more tractable interventions.
Either way, it's not entirely clear, and it depends heavily on your philosophy, risk tolerance, knowledge, and funding counterfactuals.
Great post, not something we often see discussed. I think it's unlikely to take off because it's hard for EAs to sympathise; EA often becomes a purpose in itself. What if we circulated something like this book as a "guide" for people to find their purpose? https://www.goodreads.com/book/show/38452905-the-pragmatist-s-guide-to-life
Great post; it should have more upvotes IMO. I don't see many people thinking much about this.
Thoughts on Correlations:
Too much money comes from tech and crypto: we should diversify EA funding into pharma, energy, healthcare, transport, etc. (We could do this by encouraging E2Gers to go in this direction; there are also direct-impact opportunities here.)
Too much focus on non-profits and not enough on for-profits and entrepreneurship: I've become more sold on for-profits recently. Why? The self-reinforcing mechanism of your product funding itself can create a flywheel effect, allowing you to scale as fast as possible and push impact that is uncapped by funders. It's worth noting SBF was EA from the start; we should seed the next five SBFs to cover the other five (and growing) cause areas of EA.
Too much risk aversion: we play by the rules too much and often play it safe. I think the iterative and empirical approach is great, and we hold a lot of that stock in the EA portfolio; what we don't have is the ~10% of the portfolio allocated to high-risk, high-reward projects. I'd like to see a culture shift towards larger risk-taking, with more status, money, and awards for failures.
Too much stock is put in the opinions and takes of specific individuals and entities: at the end of the day, one person's opinion is just that. As has previously been written about, there is a large culture of deference and referencing in EA. Individuals should be encouraged to think for themselves and defer less; IMO EA has a bad culture on this front. What are the real chances that 80+% of people would come to a specific conclusion on their own (which often happens in EA)?
I'm highly sympathetic to this. Informational asymmetries likely account for a lot of harm worldwide, and there are plenty of informational arbitrage opportunities.
I suspect this gap may be closed by something like a combination of GPT-3-like AI and a recommender app, as mentioned in the post. It seems worth pursuing and could work well as a for-profit model, too. (It's interesting to think how much good Google has done in the world with Search, Docs, Sheets, Meet, and so on.)
It's also worth considering how "bad" Google can be these days, with website owners and companies optimising for SEO to make money. I often pine for the ability to talk to my phone and have it intelligently talk back, make recommendations, take notes based on how I use it, and in general be an excellent PA and life planner, but it isn't.
Really interesting post, I think this kind of macro thinking and overview is sorely needed.
I agree with improving forecasting. Realistically, we should model out chances of success, as you have done here with weights, and then allocate our resources accordingly.
For example, if lab-grown meat has a 50% chance of success, we should weight it with 50% of the resources, and so forth. I'm pretty sure this is currently very out of alignment, and unfortunately influenced by funder preferences and so forth.
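The weighting idea above can be sketched in a few lines. This is a minimal illustration, not anything from the original post: the intervention names and probabilities are made up, and a real allocation would also account for marginal returns and funding gaps.

```python
# Hypothetical sketch: split a budget in proportion to each intervention's
# forecast probability of success (probabilities and names are illustrative).

def allocate(budget, forecasts):
    """Return per-intervention funding shares, normalised so the
    probability-weighted shares sum to the full budget."""
    total = sum(forecasts.values())
    return {name: budget * p / total for name, p in forecasts.items()}

shares = allocate(1_000_000, {
    "lab-grown meat": 0.50,  # illustrative figure from the example above
    "plant-based":    0.30,
    "fermentation":   0.15,
})
```

Here lab-grown meat receives the largest share because it carries the highest forecast probability; normalising by the total keeps the shares summing to the budget even when the raw probabilities don't sum to 1.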
Thanks for the insightful post.
Hey George, glad to hear more on this project and thought this was a great write-up.
In my opinion, this is exactly the kind of thing we need more of in the vegan food world, and I'm very on board with the thesis of making a superior product and obsoleting meat rather than substituting for it.
Good luck with the venture and I will be following your progress!