
If we want to help farmed animals, there are at least two clear avenues to pursue. One, we can promote technologies to replace animal farming, like plant-based or clean (cultured) meat. Or two, we can try to change social attitudes towards the moral standing of animals. Does either of these options seem much better than the other? In this talk from EA Global 2018: San Francisco, Kelly Witwicki and Kieran Greig discuss the relevant considerations.

A lightly edited transcript of Kelly and Kieran's talk is below, followed by a question from the audience. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

Kieran: Before we go into the considerations for and against tech change versus social change, we're going to start off with a number of qualifications to narrow down exactly what we'll be discussing. The first is that for technological development, we're going to constrain ourselves to food tech, particularly plant-based alternatives and clean meat. I know there are other things we could consider, like virtual reality, or we could get a lot more speculative about things like artificial intelligence or gene editing, but we're just not going to go into those. So the focus is food tech: plant-based foods and clean meat.

Kelly: And I'm going to be talking about social change/advocacy; I might use those interchangeably. We're talking about the things that the animal advocacy community currently thinks are priorities: strong corporate or legislative reforms, personhood initiatives to get basic rights for some species or for some individuals of non-human species, and possibly also work that affects society's attitudes towards farmed animals, towards animal farming, and towards animal farming alternatives. So there's a lot of range in there. This could include supportive documentaries and non-fiction film and television, but just assume that we're talking about the best things in the clean meat and plant-based food tech space and the best things in the advocacy space. So clean chicken meat versus strong reforms or personhood initiatives, not clean suede versus, I don't know, whatever advocacy we think is ineffective.

So just assume we're talking about the top things, because in practice some of these considerations come down to "well, this is a bad tech idea and this is a good advocacy idea"; in this talk we're only considering good advocacy ideas and good tech ideas. Also, to qualify: these are considerations for how we should divide and prioritize our resources as a movement, and where the marginal value is. In practice, an individual's comparative advantage might matter a lot. If you have a degree in tissue engineering, you should probably just go work in food tech, but if you're the kind of person who could be a really strong entrepreneur or a really strong advocate, this is more of an open question for you.

These are highly speculative topics and we're relying on pretty weak evidence in either direction, so no one should come away from this with a strong opinion on whether, in general, we should be putting more into tech or more into advocacy; you should really have only a weak leaning either way. A last couple of qualifications: we're not each taking a side. We're each going to talk about some considerations for advocacy and for tech, generally just the ones we each find strongest on either side. And we're going through some very loaded and complicated considerations, so we're sorry if we don't have time to explain some things.

Kieran: I think another really important thing to acknowledge is that there are obviously really complex and important interactions between the two, so the approaches are complementary. If we make technological progress, we make social progress easier; and by changing attitudes, we also increase demand for the technology. That's definitely worth considering, and neither of us is saying it's either/or; obviously the question is how much we want of this versus how much we want of that. The other thing I would say here is that once people have stopped eating animals, it does seem like it would be easier to change their attitudes towards animals.

And the final point I would make here is that historically, there does seem to be a relationship between technological progress and social change. I don't have a great sense of this, but I've heard of cases where the technological change seems to have driven the social change: for example, the replacement of the horse and cart by the automobile, or the decimation of whaling by changes in how we source oil. There are also examples around female empowerment through contraception, and opposition to the Vietnam War through televised images. I'll leave it at that for now.

Kelly: And this is one of our first cruxes and disagreements. In those particular examples, I think teasing out the causation is very challenging, because we do expect there to be a cyclical effect. In the case of animal farming, we have studies suggesting that if you're thinking about eating animals, or if you're eating animals, you think they're less morally valuable than you otherwise would; so we should expect that once people are eating more clean meat and plant-based meat, they will be more receptive to animal-friendly attitudes. That's the kind of effect tech can have. And then in the opposite direction, we have things like: if people care about the animals more, they're going to want this technology more.

But teasing out the historical causation between tech change and social change, I think, is hard. Maybe the best we can say is that there is causation in both directions. And of course we also have examples where it seems like the tech just followed morality: clean energy, electric cars, and a lot of technological changes associated with the environmentalist movement seem much more driven by morality and interest in climate change than by pure interest in efficiency. So as I said, don't think of us as taking a side; we'll just clarify what each point is for.

So, here's a point in favor of advocacy. Basically, tech doesn't go backwards like social change can. Once we've developed a technology, we know how to use it and we can use it, but it's harder to say that social change will stick around. If we work on technology and bring the tech faster, then what we're doing is adding value between when the tech comes and when it would have come anyway, but our end states are equally good in the long run.

If we instead work on advocacy, that changes the direction of the future, not just the speed with which it arrives, because we expect the technology to eventually come anyway but we don't expect social change to necessarily happen if we don't work on it. So if, when the clean meat arrives, we're also doing this advocacy, then maybe we can push the change higher, and we end up with a higher-value long-term world. That's a big consideration in favor of working on social change instead of tech: we think the tech might happen without us anyway.

A counter-consideration, however, is that tech itself could be an efficient way to achieve that change: it reduces the dissonance, and maybe that helps us actually get the social change.

Kieran: Yeah, sure. I think that counter-consideration is important. If we take into account that tech change could be the thing driving the social change, and that tech, unlike social change, can't go backwards, then the timeline-versus-direction framing doesn't seem to me to clearly point in favor of focusing on social change.

Kelly: Sure. For one specific example of why this could be the case, let's say we expect value lock-in to happen some time soon. For what value lock-in means, basically just assume there's some kind of scenario, maybe artificial general intelligence develops, where for whatever reason our values at that time matter a lot: they extend into the future from that point. Then we maybe want to make sure we speed up value change as fast and as far as possible, perhaps in exchange for what would otherwise be more robust, stronger, longer-term change.

And of course some tech might offer a quicker improvement there: maybe aggressively rolling out clean meat chicken nuggets reduces cognitive dissonance more than taking the time to develop whole chicken bodies with bones and special connective tissue and everything. Maybe that's the 80/20 of what tech can do for social change. But there may also be other ways to work against speciesist prejudice that have higher near-term returns, even if we'd expect them to have lower returns relative to other strategies in the long term; things like focusing on dramatic changes, like rights of personhood for some animal. Maybe that's what we would want to do if we thought value lock-in was going to come really soon: push against that speciesist barrier as fast as possible, even if not as effectively over the long term as we could were value lock-in not to happen.

We're really trying to run through a lot of things here, so excuse us for going so, so quickly. I missed something earlier that I want to make a point on. With advocacy, there's a consideration that maybe before our advocacy efforts can even have much promise, we need to get to a point where we have the same products that we're consuming now, just with a different production process, so that there's no actual trade-off for the end user, for the consumer. That might mean we want to get the tech to as strong a place as we can right now and then do the advocacy: go for the political change once we have identical products, just with a different production process.

Kieran: Yeah, so just to jump back to the value lock-in consideration. One thing that affects my view here is that it seems like a low-probability scenario, but potentially a really high-value one. The other thing I would be wary of is that if we are trying to advance social change really quickly, perhaps using more aggressive tactics, whatever those would be for attempting to achieve personhood for certain nonhuman animals, I think there's some potential for short-term negative effects, and I'd want us to be aware of that when making that decision. And the other thing I would say is that if we do suspect there's going to be a value lock-in scenario in the somewhat near future, the tech lever might still be the best lever to pull in order to achieve the social change; it's not necessarily the case that the social change lever would be the best to pull.

Kelly: Yeah, yeah, so we want to make those chicken nuggets and reduce that dissonance, or something like that.

Kieran: Yeah. Another consideration which informs my opinion on this is that, in general, I think the comparative advantage of advocates is going to be doing advocacy, which requires care and empathy for animals. For tech change, I think we're generally going to be more replaceable: scientists and other people who don't necessarily care about animals, but who are motivated by profit and that sort of thing, could potentially work on those things. Whereas if we do care about animals, our comparative advantage seems to be in an area which requires that care.

Kelly: And of course, that could also mean that we would see very high returns from starting tech projects, because maybe we're the most motivated to start them, but then investors and consumers can take them forward once we've started. This same consideration is also a general argument for the relative tractability of tech development: maybe we're more replaceable, but maybe it's also more tractable. You can see why we think you should only lean weakly one way or the other, because these are all quite speculative considerations that each lean a little bit one way or the other. Nothing here is very strong; nothing is "okay, well, this is obvious, we should be over here."

Something I've been thinking a bit about lately is that it's possible humanity's moral circle mostly includes humans, most of us more or less, and then it kind of grays out from there: we care quite a bit about dogs and cats in the US, and we care a little bit about pigs and chickens right now, or at least we don't like factory farming. We don't really care about insects, and we don't really care about digital minds yet, so they're further from the moral circle.

And one consideration for working on tech instead of advocacy is that it's possible the moral circle just has a set point: it will trend towards some particular boundary and won't be able to expand forever. If there is a set point, it's probably focused around powerful beings. It's likely to be drawn so that those who are included are those you can get a lot out of, and those who are excluded are the ones who are a burden to care for, who don't really give you anything in return: beings you just have to care for, like children who are always going to be children, who aren't going to end up being productive to your society or to the things you want out of your life.

That would probably mean our moral circle stops at humans, or at least that it's a lot harder to get past humans, because with humans of different ethnicities, reduced racism means lower conflict, and that's great for us. But including chickens is not necessarily great for us; it's only great for the chickens. So that would be a reason for thinking the advocacy is less tractable, and therefore that we want to work more on the tech.

Kieran: Yeah, that's a really interesting point about a potential set point to moral circle expansion. For me, there seems to be a large amount of uncertainty and speculation involved in whether there is a set point and, if so, where it is. So that's not one of the considerations most informing my opinion here. It's definitely a consideration and we should take it into account, but something I find more informative is that for clean meat, I think there are significant questions around whether we can achieve cost competitiveness with farmed animal products.

There's been one report from the Open Philanthropy Project, and there's also been a paper published on this by Van der Weele and Tramper in 2014. The basic thesis is that with clean meat, you need a growth medium for the cells to proliferate, and the minimum costs of the growth medium currently are such that clean meat just can't become cost competitive. At the minimum, we're looking at something like $8 per kilogram, and factory-farmed animal products currently cost literally a fraction of that. But on the other hand, there are proponents and companies working in this area. For example, Hampton Creek reported that they would have clean meat in restaurants by 2018, and another group recently reported that by the end of 2018 their prices would be at $8 per kilogram. I think I'll leave it at that.

Kelly: So this gets at, I think, a major crux in some of these considerations: basically, clean meat should eventually become cheaper than animal meat, because in terms of the physical constraints of the universe it necessarily involves less energy. If you're growing an entire individual, you have to develop their brain and their immune system. Clean meat might need some kind of immune system, but we don't need to create a sentient, intelligent brain; it feels like we can skip that part, and we can skip the skeleton, so there are a lot of processes that just don't need to be part of the clean meat process. Theoretically, given enough technological advancement, the physical constraints of the universe suggest this is what should happen eventually: maybe in 100 years, maybe in 1,000 years. But it should happen eventually, which gets us to the crux here.

If you discount your uncertain impacts a lot the farther away they are, whether further into the future or further away in space, basically discounting them down to zero or close to it, then you may be more interested in the next few decades. That can cut different ways on some of these considerations, but for this particular one, it might mean you want to work on the advocacy and not the tech, because you don't think the tech is going to come in the next few decades. If instead you are more interested in the very long term, and you don't discount the uncertainty of impacts 1,000 years from now, then you may be more interested in pursuing the technology, because you think it's going to happen at least sometime before then.
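To make the discounting crux concrete, here is a minimal sketch (not from the talk; the rates and horizons are made-up illustrative numbers) of how even a small annual discount rate drives the present value of a far-future impact towards zero:

```python
# Illustrative only: the discount rates and horizons below are
# made-up numbers for intuition, not figures from the talk.

def discounted_value(impact, annual_rate, years):
    """Present value of `impact` realized `years` from now,
    under simple exponential discounting."""
    return impact / ((1 + annual_rate) ** years)

for rate in (0.00, 0.01, 0.05):
    for years in (30, 100, 1000):
        pv = discounted_value(1.0, rate, years)
        print(f"rate={rate:.0%}  years={years:>4}  present value={pv:.2e}")
```

With a 0% rate, an impact 1,000 years out counts as much as one today; at 5%, it is worth roughly 10^-22 of its face value. Whether you discount like this is exactly the crux described above for choosing between decades-scale advocacy and longer-horizon tech bets.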

And that goes for a few of the other considerations here, so make a note if that's something you do: if you regularly lean either towards the near-term stuff or the far-term stuff, that could make the decision much clearer for you, one way or the other. We have a couple of other considerations, and we may not have time for questions, but let's try to get through this quickly so we can do one or two.

One of the considerations for advocacy is that many existing animal product replacements are very high quality, and yet only a small part of the population is vegan, and our animal product consumption continues to rise. That suggests that just making the technology good enough isn't sufficient; we need advocacy around it, to make people excited about it in order to get it adopted, because just having the technology isn't enough.

Kieran: I'm not sure they are high quality. I have talked to people who feel like the alternatives clearly don't taste like meat, or that this is clearly different from the taste of milk. The other thing is that the ones which do seem high quality tend to be more expensive, and by basic economics, price is one of the factors that determines what people buy: if the alternatives are more expensive, people are going to buy them less. So if we can drive the costs down through further technological development, then yeah, that could make progress a lot easier.

So I think the final consideration for me is that there are axes other than tech versus social change which inform my opinion here. When I'm thinking about possible donation targets on the social change side versus the technological development side, it feels like there are other very relevant differences between them. For example, the track record of certain organizations seems to be stronger on the advocacy side: I can look at something like THL, which has been around for a number of years, and feel more confident in them. On the technological development side, these are new startups with very limited track records.

Similarly, I feel more confident on the advocacy side in terms of how advocates have responded to signs of success and failure, and have been able to update their approach in light of those things. I feel less confident that the tech side will be able to do that. So there are these other relevant dimensions, beyond the tech versus social change dichotomy, which are also informing my opinion here. They do seem to be importantly different between the two options, so I thought I would highlight that.

Kelly: Well, there you go. Sorry we don't have any answers for you and have just left you more confused.

Q&A

Question: Your argument about the long-term future being better with advocacy than with tech suggests that everyone will eventually switch to clean meat or plant-based meat. Do you think there will still be a large percentage of the population that continues eating non-plant-based meat? And if so, how much?

Kelly: So there are a couple of things in this. Obviously these graphs are oversimplifications, but I think it's unlikely that literally just developing the tech itself is going to be enough. There have been plenty of other technologies that were not adopted because advocacy got in the way, or people's attitudes got in the way. Even though it's a better technology, nuclear energy got to about 80% of the grid in France but only about 20% in the US. We don't want the same kind of thing to happen to clean meat. We don't want clean meat to be banned the way GMOs are in some places, because people are more afraid of cell-based meat than they are of factory farms. That would suck. So we want to do advocacy work to make sure that doesn't happen. If we literally weren't doing any advocacy, I would not expect the entire world to adopt clean meat, at least not anytime soon. Maybe in the long run: it is more efficient, so it probably would happen eventually, so this is maybe partially just a speed consideration.

But when I say advocacy, I also mean things that expand the moral circle more broadly. So not just what's literally going to get animals switched off people's plates for cell-based or plant-based products, but also what's going to make people care more about those animals so that they don't hurt them in other ways, and what's going to make them care more about other minds so that other problems can get solved too. And I think if we have this moral emphasis around the way we end animal farming, if we end it not just because it's more efficient but with advocacy alongside it saying "we're doing this because we care about the animals", a message that's also much easier to believe once the technology makes the change easy, then that affects people's attitudes and helps us keep expanding that circle to protect more people.
