Why it’s important to fill out this consultation
The UK Government is currently consulting on allowing insects to be fed to chickens and pigs. This is worrying, as the government explicitly says the changes would “enable investment in the insect protein sector”. Given the likely sentience of insects (see this summary of recent research), and median predictions that 3.9 trillion insects will be killed annually by 2030, we think it’s crucial to try to limit this huge source of animal suffering.
Overview
* Link to complete the consultation: HERE. You can see the context of the consultation here.
* How long it takes to fill out: 5-10 minutes (5 questions in total, only 1 of which requires a written answer)
* Deadline to respond: April 1st 2025
* What else you can do: Share the consultation document far and wide!
* You can use the UK Voters for Animals GPT to help draft your responses.
* If you want to hear about other high-impact ways to use your political voice to help animals, sign up for the UK Voters for Animals newsletter. There is an option to be contacted only for very time-sensitive opportunities like this one, which we expect to arise fewer than 6 times a year.
See guidance on submitting in a Google Doc
Questions and suggested responses:
It is helpful to have a lot of variation between responses. As such, please feel free to add your own reasoning, or, in addition to animal welfare reasons for opposing insects as feed, to include non-animal-welfare reasons, e.g. health implications, concerns about farming intensification, or the climate implications of using insects as feed.
Question 7 on the consultation: Do you agree with allowing poultry processed animal protein in porcine feed?
Suggested response: No (up to you if you want to elaborate further).
We think it’s useful to say no to all questions in the consultation, particularly as changing these rules means that meat producers can make more profit from selling meat.
Some possible criteria:
I like this list. We could improve on it by establishing a hierarchy of metrics.
1st Tier: the most quantifiable and objective metrics, which are also most strongly tied to or correlated with direct impact.
2nd Tier: quantifiable metrics which aren't directly tied to increased impact but are strongly expected to lead to it. In this tier I include memberships, which are expected to lead to more donations and to help overcome constraints on talent and human capital.
3rd Tier: metrics which are less direct, more subjective, and less quantifiable, and which are more about awareness than expected impact.
I think it's possible for a metric to jump from one tier to another in terms of how much confidence we place in it. This can happen under dramatic circumstances: for example, "media coverage" or "positive media coverage" would be something we'd have much more confidence in as a marker of impact if effective altruism got a cover story in, e.g., TIME magazine.
I'm skeptical of explicit metrics like "number of GWWC pledge signers", "money moved", etc. Any metrics that get proposed will be imperfect and may fall prey to Goodhart's law.
To me, careful analysis and thoughtful discussion are the most important aspects of EA. Good intentions are not enough. (After you read the previous article, imagine if an earlier EA movement had focused on "money moved to Africa" as its success metric.)
The default case is for humans to act altruistically in order to look good, not do good. It's very important for us to resist the pull of this attractor for as long as possible.
Turning the current negative feedback loop (donors give based on "warm glow", not impact → charities are disincentivized to gather/provide meaningful impact info → donors who want impact info can't find it and give based on warm glow) into a positive feedback loop (donors give based on impact → charities are incentivized to achieve/measure/report impact → it becomes easier for donors to conduct better analysis).
More generally, drastically shifting incentives people face re: EA behavior (giving effectively, impact-based career decisions, keeping robots from killing us, etc.)
A sustainable flourishing world!
I was reading Lifeblood by Alex Perry (it details the story of malaria bed nets). The book initially criticizes a lot of aid organizations because Perry claims that the aim of aid should be "for the day it's no longer needed". E.g., the goal of the Canadian Cancer Society should be to aim for the day when cancer research is unnecessary because we've already figured out how to beat it. However, what aid organizations actually do is expand to fill a whole range of other needs, which is somewhat suboptimal.
In this case, EA is really no exception. Suppose that in the future we've tackled global poverty, animal welfare, and climate change/AI risk/etc. We would just move on to the next most important thing in EA. Of course, EA is different from classical aid organizations, because it's closer to a movement/philosophy than to a single aid effort. Nevertheless, I still think it might be useful to define "winning" as "alleviating a need for something". This could be something like "to reach a day when we no longer need to support GiveDirectly [because we've already eliminated poverty/destitution, or because we've reached a quality of wealth redistribution such that nobody is living below X dollars a year]".
On that note, for effective altruist organizations, I imagine that 'not being needed' means 'no longer being the best use of our resources', or 'having hit significantly diminishing marginal returns to additional work'. That said, the condition for an organization to rationally wind down is different from its success condition.
One obvious point: most organizations/causes have multiple, increasingly large success conditions. There isn't one 'success condition' but a progressive set of improvements; we won't 'win' in some abstract, final sense. I don't think Martin Luther King would say that he 'won': he accomplished a lot, but things got complicated toward the end and there was still a lot to be done. Needless to say, though, he did quite well.
A better pair of questions may be 'what are some reasonable goals to aim for?' and then 'how can we measure how far we are from those specific goals?'
In completely pragmatic terms, I think the best goal for us is not legislation but monetary donations to EA-related causes.
Goal 1: $100m/year
Goal 3: $1b/year
Goal 4: $10b/year
etc.
The ultimate goal for all of us may be a positive singularity, though that is separate from effective altruism itself and harder to measure. Also, the amounts above would of course have to be adjusted for the quality of each EA organization relative to the best.
There is, of course, still the question of how good the interventions are and how good the intervention-deciding mechanisms are. However, measuring or estimating those is quite a bit more challenging, and it presents a distinct, largely orthogonal challenge to raising money. For instance, growing the movement and convincing people at large would be an 'EA popularity goal', measured in money, while finding new research to understand effectiveness would be more of an 'EA research goal'. Two very different things.
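To make that adjustment concrete, here is a minimal sketch of how quality-adjusted money moved could be tracked against the tiered goals above. All org names, dollar figures, and quality weights are hypothetical, purely for illustration:

```python
# Hypothetical sketch: quality-adjusted "money moved" vs. tiered goals.
# Every figure and quality weight below is invented for illustration.

# Quality of each org's interventions relative to the best known org (1.0 = best).
orgs = {
    "org_a": {"money_moved": 40e6, "quality": 1.0},
    "org_b": {"money_moved": 120e6, "quality": 0.5},
    "org_c": {"money_moved": 300e6, "quality": 0.1},
}

goals = {"Goal 1": 100e6, "Goal 3": 1e9, "Goal 4": 10e9}  # $/year targets

# Weight each org's raw money moved by its relative quality, then sum.
adjusted_total = sum(o["money_moved"] * o["quality"] for o in orgs.values())

print(f"Quality-adjusted total: ${adjusted_total / 1e6:.0f}m/year")
for name, target in goals.items():
    if adjusted_total >= target:
        status = "met"
    else:
        status = f"{adjusted_total / target:.0%} of the way there"
    print(f"{name} (${target / 1e6:,.0f}m/year): {status}")
```

On this framing, an org that moves a lot of money to weak interventions contributes far less than its raw figure suggests, which is exactly why the adjustment matters.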
Hitting sharply diminishing returns on QALYs/$
Currently you can buy decades and decades of QALYs for a year's salary or less. And that's just straightforward, low-variance, uncontroversial purchases. If you cast your net wider (far-future concerns), you could potentially be purchasing trillions of QALYs in expectation. I'll consider EA to have won once those numbers drop to something reasonable.
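As a rough back-of-envelope (both numbers below are illustrative assumptions, not sourced estimates), the claim works out like this:

```python
# Back-of-envelope: how many QALYs one year's salary might buy.
# Both inputs are illustrative assumptions, not sourced estimates.
salary = 50_000        # one year's salary, in dollars (assumed)
cost_per_qaly = 100    # assumed cost of one QALY via a top intervention, in dollars

qalys = salary / cost_per_qaly
print(f"~{qalys:.0f} QALYs, i.e. roughly {qalys / 80:.0f} 80-year lifetimes")

# "Winning" on this view: the cost per QALY rises until the same salary
# buys only a handful of QALYs rather than hundreds.
```

Once the cheap purchases are exhausted, the cost per QALY climbs, and the same salary buys far less.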
Clippy wants to point out that this goal could easily be achieved through a deadly virus that wipes out the human race, planetwide nuclear winter, etc. :P
Yep, that's fundamental. Also, we don't want to give the impression that our obligations are limited to opportunities that land in our lap. If we seem to be hitting diminishing returns, it's time to try looking for awesome opportunities in different domains.
I would think that what counts as winning is likely to depend sharply on cause area, or at least on particular assumptions that are not agreed upon in the EA community, at least if winning is to be defined sufficiently concretely. Most EAs could probably agree that a world where utility (or some fairly similar metric or optimization target) is maximized is a win. Which world realizes this depends on views about the value of nonhuman animals, the value of good vs. bad experiences, and other issues where I've seen quite a bit of disagreement in the EA community.