Karolina is co-founder and Director of Research at Charity Entrepreneurship.
She also serves as a Fund Manager at the EA Animal Welfare Fund, and as a board member and consultant for various nonprofits and think tanks.
"Visions" - another song he released in 2021 gives me very strong EA vibes. Lyrics include:
Visions
Imagining the worlds that could be
Shaping a mosaic of fates
For all sentient beings
[...]
Visions
Avoidable suffering and pain
We are patiently inching our way
Toward unreachable utopias
Visions
Enslaved by the forces of nature
Elevated by mindless replicators
Challenged to steer our collective destiny
Visions
Look at the magic of reality
While accepting with all honesty
That we can't know for sure what's next
No, we can't know for sure what's next
But that we're in this together
We are here together
How has the EA fund grown over the years?
You can check the donations made through all the funds on our website. Below I pasted a graph illustrating AWF’s growth over the last three years:
Do you have a sense of what percentage of overall EA Animal Welfare giving is being done through the fund as opposed to direct donations from EAs to orgs?
I’m not aware of any comparison data of that sort, but a couple of sources (mainly EA Survey) may give us some approximate answers.
EA Survey 2019 Series: Donation Data reports the following amounts donated by EA community members who completed the survey. Note, however, that $87,385.50 of that total was donated to the EA Animal Welfare Fund.
Of that, the following organizations received funding:
Those numbers add up to $463,104.79, so something is not right with the data, but it gives us a ballpark figure.
Lewis Bollard provided another interesting piece of data in one of his newsletters, where he estimated farmed animal advocacy groups' and teams' revenue by year. Note, however, that those estimates are from 2014 and 2016, i.e., from before the AWF launched.
Lastly, for comparison, Lewis Bollard’s 2018 newsletter states that “Since the start of 2016, the Open Philanthropy Project has approved 82 farm animal welfare grants totaling $47M to 50 grantees in 24 countries.”
Similarly to the LTFF, we solicit applications via an open process advertised on relevant sites and Facebook groups, and by individually reaching out to promising candidates. Additionally, we create an RFP and distribute it accordingly, which I believe the LTFF decided not to do. As at the LTFF, applications to the AWF are initially triaged, with applications that are out of scope or clearly below the bar for funding rejected; however, we reject <5% of applications at that stage rather than 40%. The remaining applications are assigned to a primary and a secondary fund manager with relevant, compatible expertise.
From the LTFF:
The assigned fund manager will read the application in detail, and often reaches out to interview the applicant or ask clarifying questions. In addition, they may read prior work produced by the applicant, reach out to the applicant's references, or consult external experts in the area. They produce a brief write-up summarizing their thinking, and assign a vote to the application.
This is applicable to AWF as well. However, before the primary reviewer assigns their vote, they notify the secondary reviewer and ask for their input. We’re also a bit less likely to reach out to interview the applicant.
What follows is voting by all fund managers. As outlined in my answer to another question from Marcus, we grade all applications with the same scoring system. For the prior round, after the primary and secondary investigators completed their reviews and we had all read their conclusions, each fund manager gave a score (excluding cases of conflict of interest) from +5 to -5, with +5 being the strongest possible endorsement of positive impact, and -5 an anti-endorsement of a grant that is actively harmful to a significant degree. We then averaged the scores, approving those at the very top and dismissing those at the bottom, and largely discussed only those grants that are around the threshold of 2.5, unless anyone wanted to actively make the case for or against something outside of those bounds (the size and scope of other grants, particularly the large grants we approve, are also discussed).
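The averaging-and-threshold step described above can be sketched in a few lines of code. This is purely illustrative: the function name, the exact bucketing rule, and the example scores are my own assumptions, not the fund's actual process or tooling; in practice, grants near the threshold are discussed rather than mechanically bucketed.

```python
# Hypothetical sketch of the voting aggregation described above.
# Names, bucketing rule, and example scores are illustrative assumptions.

def decide(scores_by_grant, threshold=2.5):
    """Average fund managers' scores (+5 to -5) and bucket each grant."""
    decisions = {}
    for grant, scores in scores_by_grant.items():
        # Managers with a conflict of interest simply don't submit a score.
        avg = sum(scores) / len(scores)
        if avg >= threshold:
            decisions[grant] = "approve"
        elif avg <= -threshold:
            decisions[grant] = "reject"
        else:
            decisions[grant] = "discuss"  # near-threshold grants get discussed
    return decisions

votes = {
    "grant_a": [4, 5, 3],     # avg 4.0, well above threshold
    "grant_b": [1, 2, -1],    # avg ~0.67, near-threshold case
    "grant_c": [-4, -3, -5],  # avg -4.0, well below threshold
}
print(decide(votes))
```

A real process would of course layer discussion and judgment on top of the raw averages; this only shows the arithmetic skeleton.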
Similarly to the LTFF, we provide feedback on a subset of applications (both approved and rejected) where we believe our perspective could be particularly beneficial for the applicant's future work; however, we only provide feedback if the applicant asks for it.
We don’t have any immediate plans to write a longer post about the process outside of this AMA. However, we are generally planning to increase communication about the fund’s approach, so that is something we could potentially draft in the future, unless other, higher-priority write-ups take precedence.
I would expect that people with deep expertise in software engineering may have a better understanding of how they can apply those skills than a person without such background. We are always keen to hear people's ideas, so you can encourage others to think of an impactful project and apply to the fund!
One example of an idea we funded in this category was a prototype algorithm, developed by Charles He, that identifies the exact location and number of animals in each Iowa egg farm based on Google Earth data.
One of the projects I would be keen to see is an interactive data visualization of the issues faced by different animals in different countries and conditions, similar to what GBD Compare created to aggregate and visualize sources of DALYs lost to different conditions in humans. Maybe some software engineering skills could also be helpful in research on wild animals, e.g., tracking patterns of behavior. Again, I have very low confidence in these ideas, so they should be treated as creative brainstorming rather than recommendations. :)
Lastly, I would recommend checking out services offered by Animal Advocacy Careers, including their job board and career coaching. They may be aware of some opportunities available for people with a background in software engineering.
We aggregated the data from 2017 (when the fund started) to 2020.
One caveat is that it doesn't represent the ideal funding distribution we are aiming at. For example, if we had received more applications from groups working in Asia, the amount granted to that region would likely have increased as well. Additionally, differences in the cost of running a program in various parts of the world make the amounts look slightly disproportionate, so I outlined both the amount and the number of grants made in each region.
The current breakdown looks like the following:
There are multiple benefits I see:
That's right, there is growing support for invertebrate welfare work.
One point that I feel we haven’t communicated well enough is that the cost of $27,000 per farm in the CEA doesn’t literally mean that we will pay each farm $27,000. As mentioned in the post, “this aims to set a conservative minimal threshold for cost-effectiveness. A high-scale, lower cost strategy (e.g. outreach through farmers associations) could further increase cost-effectiveness.” In the CEA we want to test the worst possible scenario, which doesn’t mean that this will be the strategy. I will make a note to structure our reports differently in the future, to avoid the confusion that what we test in a “charity report” is literally the implementation the organization is going to go with.
“However, I couldn't see any mention in the report of how the initial work with individual farms could be translated into policy change.”
Sorry that we don’t include the details about the implementation in the charity ideas report. We usually follow up those reports with an “implementation report” that discusses long-term strategy, etc. Those are shared with co-founders, who often contribute to them. Still, we prefer not to share them publicly for two reasons: i) we don’t want the details of the strategy to potentially negatively affect the campaign; ii) the strategy outlines the uncertainties that co-founders have to test at the beginning and how they should adapt the strategy accordingly, so because the plans are to some extent flexible and could change, we don’t want to create confusion. More specifically to your point,
“It seems likely that it would increase profits in the Indian egg industry by paying for something (at an estimated cost of $27,000 per farm according to the model) which will likely increase the overall profitability of farms”.
The approach the charity will take is to first try to achieve success on cage-free and feed fortification through multiple means that don’t require any support from us and put the costs on the producers (e.g., outreach through farmers’ associations and partnership building). If that is unsuccessful (e.g., because there is no proof of concept), it would then try to subsidize the additional cost a farmer would have to take on to provide a higher level of nutrients in the feed (e.g., if low-nutrient feed costs $1 and high-nutrient feed costs $3, we would subsidize the $2 difference between them). That way, producers’ costs are the same as before the intervention, and hens get higher-nutrient feed at the same time. No change in costs = no change in price = no major change in long-term profitability. If there is still resistance to fortification, only then would we consider a higher level of subsidization to achieve proof of concept, scaling it back once the practice is more widely adopted. That would last only until enough farmers operate that way to push for more systemic change, e.g., new mandatory feed standards in state regulations that are not subsidized at scale. What we model in the CEA is the absolute worst-case scenario, not the most likely scenario.
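The subsidy arithmetic above can be made concrete with a tiny worked example. The $1/$3 feed prices come from the text's hypothetical; the function name and everything else here are illustrative assumptions, not the charity's actual model.

```python
# Worked example of the difference-only subsidy described above.
# Prices are the hypothetical $1/$3 figures from the text, not real feed costs.

def subsidy_per_unit(low_nutrient_cost, high_nutrient_cost):
    """Subsidy covering only the price gap, so the farmer's net cost is unchanged."""
    return high_nutrient_cost - low_nutrient_cost

low, high = 1.00, 3.00                 # cost per unit of feed, low- vs high-nutrient
subsidy = subsidy_per_unit(low, high)  # we cover the difference
farmer_net_cost = high - subsidy       # equals the original low-nutrient cost

print(subsidy, farmer_net_cost)
```

The point of the design is visible in the last line: the farmer's out-of-pocket cost after the subsidy equals what they paid before, so the intervention should not change prices or long-term profitability.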
"Although the animal advocate understands that these problems could also be problems for the cage-free campaigns, they think that cage-free is a better ask because it tackles one of the underlying issues of intensive factory farming (confinement), where feed fortification doesn’t."
I agree with this advocate’s opinion that behavioral restrictions (like foraging and movement deprivation) caused by conventional cages and enriched cages are the biggest welfare problem, as you can see on the graph we linked from Cynthia Schuck-Paim and Wladimir Alonso’s forthcoming book, Quantifying Pain in Laying Hens. But keel bone fractures are the second biggest issue in conventional and enriched cages and the biggest in cage-free/aviary systems.
That’s why in places where there is no cage-free production (e.g., India), we would recommend a focus on cage-free plus feed fortification, and in places where the shift to cage-free has already happened, we want to work on feed fortification to avert keel bone fractures.
When speaking with advocates about it, we only spoke about feed fort in India, instead of cage-free + feed fort in India, so maybe that created confusion.
Hi Gordan! Happy to respond more in-depth, but first I have two clarifying points. This intervention is for egg-laying hens, not broiler chickens. Egg-laying hens are not used for meat, but I could address your question from the perspective of egg quality; is that fine? Also, are you arguing that feed fortification will specifically be more prone to “humane-washing” compared to, e.g., cage-free/broiler campaigns, or that all welfare-focused interventions that aim to improve conditions on farms are prone to “humane-washing” and may therefore be net-negative in the long term?
Hi Jamie! Thanks for engaging with the research.