Kevin Xia 🔸

Managing Director @ Hive
1149 karma · Working (0-5 years) · Austria
bio.site/kevinhive

Bio

Managing Director at Hive. Effective Altruism and Animal Advocacy Community Builder, experience in national, local and cause-area specific community building. Amateur Philosopher, particularly keen on moral philosophy.

How others can help me

I'm super happy to chat with anyone and learn from you, so don't hesitate to reach out even if you don't have expertise in any of the following. That said, some specific areas I am hoping to learn more about are:

- I work at Hive, a global community-building organization for farmed animal advocates. I would love to hear your thoughts, (project) ideas and feedback! 

- The implications, opportunities and risks of AI development for farmed animal advocacy.

- Farmed animal advocacy careers outside of NGOs and alt-protein (e.g., food-industry or adjacent-sector jobs, and policy roles in governmental institutions).

How I can help others

I have a fairly good overview of the farmed animal advocacy space, so I'm happy to chat about all things there. I find that I am most helpful in brainstorming, red-teaming, effective giving and career advice. And, of course, happy to talk about Hive or meta-level work in animal advocacy more generally! I have some experience in community building at the city, national and cause-area-specific level, so happy to nerd out about that. I also have a background in philosophy, focusing on moral philosophy - so happy to bounce ideas or chat cause prioritization.

Comments (30)

70% disagree

Very uncertain on this one - mainly a matter of "I just don't see why it would" and a strong default to "technological progress has largely been bad for animals."

I do think the "better" AI goes for humans (or broadly, the more "extreme" the outcome is), the more likely it is that factory farming would basically disappear incidentally.

However, I think a large range of possible futures where AI goes well for humans are (comparably) normal scenarios, in which I just don't have any strong reason to believe that they would go well for animals.

Yes, "cost" is the negation of utility, and the whole thing is anchored against 0 (so, the world where everything goes well is the baseline, 0, and it only calculates how bad it would be for something to go wrong). There is definitely a more elaborate version of this where you differentiate between more possible worlds that go badly, neutrally, or well for humans and/or animals and involve negative and positive numbers - not sure how much that would realistically change, cause prio wise.

This is not really an argument to either side, but a while ago I created a rough little spreadsheet where you can put in:

- How much disvalue you see in the world going badly for animals vs. humans
- How likely you think it is that the world will go badly for animals vs. humans vs. both
- How much of the world that makes AI go well for humans you expect to also help make AI go well for animals

And it calculates for you what you should focus on (AIS vs. AIxAnimals) :) 

It's very rough, very proxy, all the usual caveats apply. But I am hoping that it can help with some intuitions about the implications of whatever stance you have after (!) the debate week. Feel free to make a copy and tweak it for your own use!
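For anyone who would rather see the logic than open the spreadsheet, here is a minimal sketch of the kind of expected-disvalue comparison it runs. Every variable name and number below is a placeholder assumption for illustration, not the spreadsheet's actual cells or formulas:

```python
# Rough sketch of the expected-disvalue comparison described above.
# All numbers are placeholder assumptions, not the spreadsheet's inputs.

# How bad each bad outcome would be (arbitrary disvalue units; 0 = everything goes well)
disvalue_bad_for_animals = 10.0
disvalue_bad_for_humans = 1.0

# How likely each bad outcome is
p_bad_for_animals_only = 0.4
p_bad_for_humans_only = 0.1
p_bad_for_both = 0.2

# Share of the work that makes AI go well for humans that also makes it go well for animals
spillover_humans_to_animals = 0.2

# Expected disvalue that a marginal unit of each focus area addresses
value_of_ai_safety = (
    (p_bad_for_humans_only + p_bad_for_both) * disvalue_bad_for_humans
    + spillover_humans_to_animals
    * (p_bad_for_animals_only + p_bad_for_both)
    * disvalue_bad_for_animals
)
value_of_ai_x_animals = (p_bad_for_animals_only + p_bad_for_both) * disvalue_bad_for_animals

print(f"AI safety focus:    {value_of_ai_safety:.2f}")
print(f"AI x Animals focus: {value_of_ai_x_animals:.2f}")
print("Focus on:", "AI safety" if value_of_ai_safety >= value_of_ai_x_animals else "AI x Animals")
```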

On the first point - that seems right. I think in a discussion like this, there can be a lot of confusion and conflation about what is meant by net-negative welfare, lives worth living/barely worth living, to what degree one can and should trust empirical assessments about the welfare of animals, etc. My best guess is that people are typically "somewhat" risk-averse and "somewhat" negative-leaning consequentialists, so the bar for empirical evidence to show that chickens live net-positive lives is intuitively set higher - both for how solid the evidence is and for how positive the lives are. That being said, I do think one can distrust intuitions as information for moral judgments while leaning on them for empirical questions - that doesn't seem inherently at odds to me. (I do think that the latter still clashes with "using evidence and reason," of course, but it can be accounted for with risk aversion and negative-leaning positions - which would change what "... to do the most good" means.) But at this point, I am just speculating about what people are thinking in making these arguments.

On the second point: my impression is that EAs rarely completely abandon moral intuition. They don't consider it particularly trustworthy, but they don't think it's useless either. It serves some function (e.g., finding the internally consistent theory that is least counterintuitive, or that satisfies the most or the deepest-lying moral intuitions), but then they'd basically have the theory take it from there (in theoretical discussion; once again, this tends to be different when the theory is actually acted upon). I agree that it is plausibly arbitrary where to draw the line at which to abandon moral intuitions (others might disagree with calling that arbitrary), but I don't think it usually serves prior commitments (in my experience, EAs are the social-impact group most open to just changing their commitments). That being said, I do think that some form of this (drawing a seemingly arbitrary line beyond which not to trust moral intuition) is true for effectively everyone who doesn't completely lean into moral intuitionism. My best-guess explanation here is basically what I expressed in the last three points of my initial comment: most (perhaps all) EAs I am thinking of in this context are "doing good" first, and underlying that is a strong moral compass/intuition that can be in real practical tension with trying to abandon moral intuition as information. So they try to find the right balance, with the balance being on average more in favor of a well-reasoned theory over intuition, but not completely.

Thank you for sharing your paper, Vera! I have been trying to discuss and understand a lot of adjacent themes around foundational philosophical assumptions, ad absurdum arguments, and moral intuitions vs. strict theory. Since you're asking how this will land with the audience here, I'd like to offer my personal account based on engaging with many EA(A)s. This is very much my subjective impression, not endorsed by any one person I discussed this with in particular. Also, I say a lot of "they" here - to be transparent, I endorse most, but not all, of these stances myself. 

  1. My impression is that EAs are largely aware of the counterintuitive implications their theory of choice faces. This has broadly been my experience with consequentialist-leaning people: they rarely claim, or even want to claim, that their ethical theory is perfect or aligned with all intuitions. They just believe the counterintuitive implications of their theory are less counterintuitive than those of other theories, and/or find other theories either inconsistent or arbitrary.
  2. Underlying this is a general desire to reason through ethics, and underlying that is a general skepticism toward moral intuition as a definitive argument. My impression is that EAs are very analytical in their approach to philosophy, and as a result, they often don't consider their own moral intuitions particularly trustworthy — to varying degrees; some might want to abandon them altogether, others simply don't weigh them heavily.

I think these two points are why many EAs would read the paper and think something like: "Yes, I know. This isn't surprising. And also, show me a theory that doesn't run into such issues."

  3. However, most EAs don't fundamentally start from a purely philosophical stance on what is true in ethics and try to apply it to all their actions. They "first" want to do good and almost instrumentally try to figure out what that means. I think most EAs are "quasi-consequentialist": when pressed, or when wanting to defend their views in a theoretical discussion, they consider consequentialism the strongest perspective to take — perhaps because they find it least counterintuitive, or closest to explaining how they conceptualize ethics.
  4. When put into practice, this stance becomes largely action-guiding. It acts as a first proxy for identifying what to do or which choice is better. But unlike in theoretical discussions, a reductio ad absurdum isn't just a "bullet to bite" where one can rest calmly on knowing that others have "bigger bullets to bite" — it's a practical blocker that is rarely broken through. I think that's why everyone is "leaning utilitarian" or "leaning consequentialist": the theory acts as a guide and pushes the limits of one's otherwise-unquestioned moral intuitions to some degree, but not far beyond what one would consider generally reasonable. This is also exemplified by how often I hear things like "I know that X is probably right, but I just don't feel comfortable doing that." Strict theory comes a close second, but almost no one completely abandons their moral intuitions in favour of it.
  5. I think a neat resolution to all of this is the concept of moral uncertainty. I wouldn't confidently claim that the people I describe above are likely to explicitly endorse this framing, but I think it explains much of the friction between theory and intuition. Under moral uncertainty, one doesn't simply act on one's best-guess ethical theory; one hedges across plausible theories, weighted by credence. That naturally prevents the kind of single-theory extremes that generate repugnant/counterintuitive conclusions in practice, even while allowing consequentialism to carry significant weight. This sort of uncertainty, I think, is very much in line with typical EAs' general way of thinking.

I think that's why most EAs won't feel particularly "addressed" by this or similar arguments — and why they'll likely end up with something like: "Yeah, I know this could be an issue, but I wouldn't do this anyway." I also think this explains why many will first try to argue based on empirical assumptions (e.g., can CAFOs even be net-positive?).

Hope this makes sense, curious to hear whether this has mirrored your experience discussing this piece :)

It might allow for more nuanced and actionable discussion to ask "how good" - perhaps something like "Promoting Ozempic will be among the most cost-effective ways to help animals."

Not necessarily the "biggest" win, but one that I didn't see coming and think is underrated is:

Malaysia’s Islamic Authority Declares Cultivated Meat Can Be Halal (First Muslim-Majority Country)

Important by itself, but even more so under (AI-)accelerated alt-protein scenarios, where non-technological barriers to adoption (such as: is cultivated meat halal?) could otherwise become unnecessary bottlenecks - and can already be addressed now.

Really enjoyed reading this post! 

You can influence Big Normie Foundation to move $1,000,000 from something generating 0 units of value per dollar (because it is useless) to something generating 10 units of value per dollar.

This example reminded me of something similar I have been meaning to write about, but @AppliedDivinityStudies got there before me (and did so much better than I could have!) - it is not just that influencing Big Normie Foundations could produce the same marginal impact due to a lower counterfactual, but also that there is way more money in them.

I think one can conceptualize impact as a function of how much money/influence we are affecting, where it is moving from (i.e., the counterfactual badness or lack of goodness), and where it is moving to. It seems to me like we are overly focused on affecting where the influence is moving to. Perhaps justifiably so, for the objections you mention in the post, but it seems far from obvious that our focus is optimally balanced.
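As a toy illustration of that framing, using the numbers from the example quoted above (the decomposition itself is just my own sketch, not how the post formalizes it):

```python
# impact ~= amount of money influenced * (value per dollar at the destination
#           - value per dollar at the origin, i.e., the counterfactual)
amount_influenced = 1_000_000   # dollars moved within Big Normie Foundation
value_per_dollar_from = 0       # the "useless" baseline from the quoted example
value_per_dollar_to = 10        # units of value per dollar at the new destination

impact = amount_influenced * (value_per_dollar_to - value_per_dollar_from)
print(impact)  # 10,000,000 units of value
```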

Great question, thank you for working on this. An inter-cause-prio-crux that I have been wondering about is something along the lines of:

"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"

It could probably be made much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures - the latter of which would be supported if, e.g., making AI go well for humans inevitably leads to, or is necessary for, making AI go well for all. (Although if it is merely necessary, it would further depend on how likely AI is to go well for humans, and so on.) But I think a general assessment of AI futures that go well for humans would be a great and useful starting point for me.

I also think various explicit estimates of how neglected exactly a (sub-)cause area is (e.g., in FTE or total funding) would greatly inform some inter-cause-prio questions I have been wondering about - assuming that explicit marginal cost-effectiveness estimates aren't really possible, this seems like the most common proxy I refer to that I am missing solid numbers on. 

Super interesting read, thanks for writing this! I have been thinking a bit about the US and China in an AI race and was wondering whether I could get your thoughts on two things I have been unsure about:

1) Can we expect the US to remain a liberal democracy once it develops AGI, especially given recent concerns around democratic backsliding? (I think I first saw this point brought up in a comment here.) And if we can't, would AGI under the US still be better?

2) On animal welfare specifically, I'm wondering whether China's very pragmatic, techno-optimistic, efficiency-focused stance could make a pivot to alternative proteins (assuming they are ultimately the more efficient product) more likely than in the US, where alt-proteins might be more of a politically charged topic?

I don't have strong opinions on either, but these two points first nudged me to be significantly less confident in my prior preference for the US in this discussion.
