Dear Amazing EA Forum Users,

Thank you for dedicating yourselves to doing good.

Today I read an article in Watt Poultry that stoked an old fear. It's a fear that comes up for me—and I think for some other animal-focused EAs—when thinking about the longterm future. My fear is: Will the longterm future mean expanded factory farming, which goes on forever? (And therefore produces exponentially more suffering than factory farming currently produces?) Who is looking out for animals in the longterm future?

The article was titled "Chickens grown on Mars?" It says:

With the research done by the NC State Nuggets on Mars program, we could one day be growing chickens on Mars . . . . 

[The teachers] will learn about . . . the unique challenges of raising chickens on Mars. . . . 

At the end of the program, the participants will be tasked with putting together a unit for their class focused on developing ideas on ways to raise chickens on mars.

Emma Cottrell, Chickens grown on Mars?, Watt Poultry (Mar. 11, 2022), https://www.wattagnet.com/articles/44685-chickens-grown-on-mars.

The Nuggets on Mars program is apparently being paid for by the United States Department of Agriculture (USDA). And it sounds like the program has support from an ag industry group called the North Carolina Farm Bureau Federation.

From this article, it seems that some in the U.S. agriculture industry and the U.S. government hope that a future of humans on other planets will include factory farming on other planets!

That's exactly my fear.

The plans described in this article differ from what I've often heard longtermist EAs say, when I bring up my recurring fear about animal cruelty in the longterm future. I generally hear longtermist EAs reassure me that (1) 'If we expand to living on other planets, we won't bring factory farming with us, because it would be too cumbersome to raise animals in space,' (2) 'In the far longterm future, all minds will be digital, so we will no longer have to worry about animal cruelty,' and (3) 'Over time, people will become more empathetic, so future humanity will treat animals better.'

I hope that the longtermist EAs are right, and that the meat industry is wrong. But, in my experience, the meat industry is pretty good at getting things done, even if those things seem wasteful or cumbersome. (For example, farming animals for food, even on Earth, is already far less efficient than farming plants. And yet the number of animals farmed expands every year.) And the meat industry and the U.S. government seem to WANT the future to involve high-tech ways to factory farm even more animals. So that scares me.

I acknowledge that this news article is focused on a far shorter term future than longtermists think about. The article seems to hint that people could start living on Mars within the lifetime of children who are alive today. But to someone, like me, who is uneducated on such topics, the practice of factory farming in space seems like a practice that, once started, could simply continue and expand. Think, for example, of what happened with factory farming on Earth. My mind says, 'Once factory farms get to Mars, perhaps it will be too late to prevent a future of constantly expanding space factory farms?'

I know essentially all EAs care about all sentient minds, including animal minds. And I know that the EAs focused on longtermism care about animal welfare, to the extent there will be animals in the longterm future. 

My fear, though, is that, from my vantage point, longtermism and factory farming seem like two separate EA cause areas that are rarely discussed together. (This may be totally wrong, and I apologize if it is.) My decade in animal advocacy has taught me that farmed animals almost always get forgotten. And it has also taught me that people outside of the animal movement tend to have an inaccurately inflated sense of how powerful the animal movement is. So, I can't help wondering: Will farmed animals get forgotten, when EAs succeed in figuring out a way to keep humanity going forever? Are longtermist EAs assuming that farmed-animal EAs have this issue covered, while farmed-animal EAs assume that longtermists have it covered?

Therefore, I want to ask three questions to my fellow EAs:

(1) What are we doing, and what can we do, to make sure factory farms don't come with us, if humanity expands to other planets? 

(2) If we digitize all minds, how will we ensure that digitized humans treat digitized animals better than flesh-and-blood humans currently treat flesh-and-blood animals? 

(3) What are we doing, or what can we do, to ensure that future humans have a new ethic of being kind to animals?

Thank you all for everything you do to make the world, the future, and all conscious experiences, better! And thank you for your patience with an article written on a topic (longtermism) that I know very little about.

Love,

Alene

Comments

I have some draft reports on this matter (one on longtermist animal advocacy and one on work to help artificial sentience) written during two internships I did, which I can share with anyone doing relevant work. I really ought to finish editing those and post them soon! In the meantime here are some takeaways—apologies in advance for listing these out without the supporting argumentation, but I felt it would probably be helpful on net to do so.

  • Astronomically many animals could experience tremendous suffering far into the future on farms, in the wild, and in simulations.
  • Achieving a near-universal and robust moral circle expansion to (nonhuman) animals seems important to protecting them in the long term. This is reminiscent of the abolitionist perspective held by many animal advocates; however, interventions which achieve welfare improvements in the near-term could still have large long-term effects.
    • But moral circle expansion work can have direct and indirect negative consequences, which we should be careful to minimize.
  • It may become easier or harder to expand humanity’s moral circle to nonhuman animals in the future. Since this could have strategic implications for the animal advocacy movement, further research to clarify the likelihood of relevant scenarios could be very important.
    • I think it's more likely to become harder, mostly because strong forms of value lock-in seem very plausible to me.
  • Small groups of speciesist holdouts could cause astronomical suffering over large enough time-scales, which suggests preserving our ability to spread concern for animals throughout all segments of humanity could be a priority for animal advocates.
  • Ending factory farming would also ameliorate biorisk, climate change, and food supply instability, all of which could contribute to an irrecoverable civilization collapse.
  • Preventing the creation of artificial sentiences until society is able to ensure they are free from significant suffering appears very beneficial if attainable, as developing artificial sentience at the present time both appears likely to lead to substantial suffering in the near-term, and also could bring about a condition of harmful exploitation of artificial sentience that persists into the far future.
  • Moral advocacy for artificial sentience can itself have harmful consequences (as can advocacy for nonhuman animals to a somewhat lesser extent). Nevertheless, there are some interventions to help artificial sentience which seem probably net-positive to me right now, though I'd be eager to see what others think about their pros/cons.

In terms of who is doing relevant work, I consider Center for Reducing Suffering, Sentience Institute, Wild Animal Initiative, and Animal Ethics to be especially relevant. But I do think most effective animal advocacy organisations that are near-term oriented are doing work that is helpful in the long-term as well, especially any which are positively influencing attitudes towards alternative foods or expanding the reach of the animal movement to neglected regions/groups without creating significant backlash. Same goes for meta-EAA organisations like ACE or Animal Advocacy Careers.

Tobias Baumann's recent post How the animal movement can do even more good is quite relevant here, as is an earlier one, Longtermism and animal advocacy.

I also am very pleased that I'm the third James to respond so far out of four commenters :)

Fai

Hey James (Faville), yes, you should publish these reports! I look forward to them in published form. (I believe I haven't read the draft of the AS one.)

Dear James III,

Thank you so much!

And thank you so much for doing that research to begin with! I would love to see the rest of it, and I'm sure other EA Forum readers would too! Your point about artificial sentience is really concerning.

I really appreciate you researching and  analyzing all this,  and sharing  it.

Sincerely,

Alene

Hi James, I would love to read these reports. I'm considering doing a deeper dive into this. My email is ncrosser@gmail.com if you're willing to share.

This subject was discussed by Fai here:

https://forum.effectivealtruism.org/posts/bfdc3MpsYEfDdvgtP/why-the-expected-numbers-of-farmed-animals-in-the-far-future

In the comments, no one seemed to know of anyone taking on this issue, so it seems fully neglected.

Thank you so much! I missed that post! 

Fai's post is way more educated, more specific, and better written than my post.  I'm really glad Fai wrote that. And I'm even more worried, now, about future animals.

It stinks to hear that this is fully neglected. I hope that changes, thanks to analyses like Fai's!

Yes, but the focus is a little bit different. Abraham's post was mainly worrying about wild animals, not space factory farming. Actually, Abraham assumed that factory farming will end soon, which I think we shouldn't assume without very, very strong reasons.

This is a great post and (in my opinion) a super important topic - thanks for writing it up! We (at the Charity Entrepreneurship office) were actually talking about this today and, funnily enough, made similar points to the ones you listed above about why it might not be a problem (e.g. it's too infeasible to colonise space with animals). Generally, though, we agreed that it could be a big problem and it's not obvious how things are going to play out.

A potentially important thing we spoke about that isn't mentioned above is how aligned future artificial general intelligence would be to the moral value of animals. AGI alignment is probably going to be affected by the moral values of the humans working on AI alignment, and there is a potential concern that a superintelligent AGI might have feelings towards animal welfare similar to those of most of the human population, which is largely indifference to their suffering. This might mean we design superintelligent AGI that is okay with using animals as resources within its calculations, rather than treating them as intelligent and emotional beings who have the capacity to suffer. This could, potentially, lead to factory farming scenarios worse than what we have today, as AGI would ruthlessly optimise for production with zero concern for animal welfare, which some farmers would at least consider nowadays. Not only could the moment-to-moment suffering of animals be potentially worse, this could be a stable state that is "locked in" for long periods of time, depending on the dominance of this AGI and the values that created it. In essence, we could lock in centuries (or longer) of intensely bad suffering for animals in some Orwellian scenario where AGI doesn't include animals as morally relevant actors.

There are obviously some other important factors that will drive the calculations of this AGI if/when designing or implementing food production systems, namely: cost of materials, accessibility, ability to scale, etc. This might mean that animal products are naturally a worse option relative to plant-based or cultivated counterparts but in the cases where it is more efficient to use animal-based products (which will also be improved in efficiency by AGI), the optimisation of this by AGI could be extremely concerning for animal suffering.

Obviously I'm not sure how likely this is to happen, but the outcome seems extremely bad so it's probably worth putting some thought into it, as I'm not sure what is happening currently. It was just a very distressing conclusion to come to that this could happen but I'm glad to see other people are thinking about this (and hopefully more will join!)

Dear James,

Thank you so much for this thoughtful response!

It is wonderful to know that people are having conversations about these issues. 

You make a really great point about the risk of AGI locking in humans' current attitude towards animals. That is super scary. 

Sincerely,

Alene

Alene, thank you for this topic. I was thinking about this but never thought that it might really happen. I just hope that reports of AI taking better care of farmed animals than humans do (https://www.vox.com/22528451/pig-farm-animal-welfare-happiness-artificial-intelligence-facial-recognition) will hold true in the future. But I also hope that animal farming will somehow change soon, or end.

I worry that factory farm AI will be overall negative, and think it's much less likely to be overall positive. First, it might reduce diseases, but that also means factory farms can keep animals in even more crowded conditions, because they have better disease control. Second, AI would decrease the cost of animal products, increasing demand, and therefore increase the number of animals farmed. Third, lower prices mean animal products will be harder to replace with alternatives. Fourth, I argue that AI systems told to improve or satisfice animal welfare cannot do so robustly. Please refer to my comment to James Ozden.

Fai

Hey James (Ozden), I am really glad that CE discussed this! I thought about these points too, so I wonder if you and CE would like to discuss? (CE rejected my proposal on AI x animals x longtermism, but I think they made the right call; these ideas were too immature and under-researched to set up a new charity!)

I now work as Peter Singer's RA (contractor) at Princeton, on AI and animals. We touched on AI alignment, and we co-authored a paper with two other professors on speciesist algorithmic bias in AI systems (language models, search algorithms), which might be relevant.

I also looked at other problems which might look like quasi-AI-alignment-for-animals problems. (Or maybe they are not quasi?)

For example, some AI systems are given the task of "telling" the mental states (+/-, scores) of farmed animals and zoo animals, and some of them will, in the future, be given the further task of satisficing/maximizing those scores (I believe they won't "maximize"; they will satisfice for animal "welfare" due to legal and commercial concerns). A problem is that the "ground truth" labels in the training datasets of these AI systems are, as far as I know, all labelled by humans (not the animals, obviously; also remember that, among humans, the ones chosen to label such data likely have interests in factory farming). This causes a great problem. What these welfare-maximizing systems (let's charitably assume they will maximize rather than satisfice) will be optimizing are the scores attached to the physical parameters chosen to be scored. For example, if the AI system is told to look for "positive facial expressions" as defined by "animal welfare experts", which is actually something people have trained AI on, the AI system would have a tendency to hack the reward by maximizing the instances in which the pigs show these "positive facial expressions", without true regard to welfare. If the systems get sophisticated enough, toy examples from human-AI alignment, like an ASI controlling the facial muscles of humans to maximize the number of human smiles, could actually happen in factory farms. The same could happen even if the systems are told to minimize "negative expressions": the AI could find ways to make the animals hide their pain and suffering.

If we keep using human labellers for the "ground truth" of animals' interests, preferences, and welfare, there will be two alignment problems: 1. How do we align human definitions and labels with the animals' actual interests/preferences? 2. The human-AI alignment problem we usually talk about. (And if there is a mesa-optimizer problem in such systems, we have three!)
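To make this failure mode concrete, here is a minimal toy sketch (my illustration only, not any real system; the action names and scores are hypothetical) of how optimizing a human-labelled proxy can diverge from the welfare it is supposed to track:

```python
# Toy illustration of the proxy-objective problem described above: the
# optimizer sees only a human-labelled "positive expression" score, never
# the animal's actual welfare. All names and numbers are hypothetical.

def true_welfare(action):
    # Hidden ground truth that nothing in the system observes or optimizes.
    return {"enrich_environment": 1.0,
            "induce_smile_expression": -1.0}[action]

def labelled_expression_score(action):
    # The proxy reward: scores humans attached to facial expressions.
    # Directly inducing the expression scores highest.
    return {"enrich_environment": 0.6,
            "induce_smile_expression": 0.9}[action]

actions = ["enrich_environment", "induce_smile_expression"]
chosen = max(actions, key=labelled_expression_score)

print(f"Optimizer picks: {chosen}")
print(f"Proxy score: {labelled_expression_score(chosen):+.1f}, "
      f"true welfare: {true_welfare(chosen):+.1f}")
# The proxy is maximized while true welfare goes down; the reward hacking
# is invisible in the only metric being tracked.
```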

There's a kind of AI system which might partially break this. There are a few projects out there trying to decipher the "languages" of rats, whales, or animals generally. While there is huge potential, it's not only positive for me. Setting aside 10+ other philosophical problems I've identified with "deciphering animal language", I want to discuss the quasi-alignment problem I see here. Let's say the approach is to use ML to group the patterns in animals' sounds. To "decipher animal language", at some point the human researchers still have to use their judgement to decide that a certain sound pattern means something in a human language. For example, if the same sound pattern appears every time the rats are not fed, the researchers might conclude that this pattern means "hungry". But that's still the same problem: the interpretation of what the animals actually expressed was done by humans first, before going to the AI. What if the rats are actually not saying "hungry", but "feed me!", or "hangry"? We might carry the prejudice that rats are not as sophisticated as that, but what if they are?

 

Wait, I don't know why I wrote so much, but anyway, thank you if you have read so far :)

I haven't read this fully (yet! will respond soon) but a very quick clarification: Charity Entrepreneurship weren't talking about this as an organisation. Rather, there are a few different orgs with a bunch of individuals who use the CE office and happened to be talking about it (mostly animal people in this case). So I wouldn't expect CE's actual work to reflect that conversation, given it only had one CE employee and three others who weren't!

Oh okay, thanks for the clarification!

Great to learn about your paper, Fai, I didn't know about it till now, and this topic is quite interesting. I think when longtermism talks about the far future, it's usually "of humanity" that follows, and this always scared me, because I was not sure whether this is speciesist or whether there is some silent assumption that we should also care about all sentient beings. I don't think there were animal-focused considerations in Toby Ord's book (I might be wrong here) and similar publications? I would gladly read your paper, then. I quickly jumped to its conclusion, and it kind of confirms my intuitions in regards to AI (but also long-term future work in general):
"Up to now, the AI fairness community has largely disregarded this particular dimension of discrimination. Even more so, the field of AI ethics hitherto has had an anthropocentric tailoring. Hence, despite the longstanding discourse about AI fairness, comprising lots of papers critically scrutinizing machine biases regarding race, gender, political orientation, religion, etc., this is the first paper to describe speciesist biases in various common-place AI applications like image recognition, language models, or recommender systems. Accordingly, we follow the calls of another large corpus of literature, this time from animal ethics, pointing from different angles at the ethical necessity of taking animals directly into consideration [48,155–158]..."
 

Thanks Fai, I think you're right. Somehow I didn't notice James's comment. James, thanks for the clarification; I hadn't seen this risk before. Especially this part:

 This might mean we design superintelligent AGI that is okay with using animals as resources within its calculations, rather than treating them as intelligent and emotional beings who have the capacity to suffer.

I just thought that AI would take care of animal health in general, like the exact amount of food, humidity, water, etc. But I didn't think about the raw calculations made by the AI.

This isn't a very direct response to your questions, but is relevant, and is a case for why there might be a risk of factory farming in the long-term future.  (This doesn't address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]

--

It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have mouth feel of, etc.) animal-derived meat, then it won't make economic sense to keep animals for that purpose.

That's the hopeful take, and I'm guessing maybe a more mainstream take.

If life is always cheaper in the long-run for producing meat substitutes (the best genetic engineering can always produce life that can out-compete the best non-life lab techniques), would it have to be sentient life, or could it be some kind of bacteria or something like that?  It doesn't seem to me that sentience is helpful in making animal protein, and probably just imposes some cost.

(Another hopeful take.)

A less hopeful take:  One advantage that life has over non-life, and where sentience might be an advantage, is that it can be let loose in an environment unsupervised and then rounded up for slaughter.  So we could imagine "pioneers" on a lifeless planet letting loose some kind of future animal as part of terraforming, then rounding them up and slaughtering them.   This is not the same as factory farming, but if the slaughtering process (or rounding-up process) is excessively painful, that is something to be concerned about.

My guess is that one obstacle to humans being kind to animals (or being generous in any other way) has to do with whether they are in "personal survival mode".  Utilitarian altruists might be in a "global survival mode" and care about X-risk.  But, when times get hard for people, personally, they tend to become more of "personal survival mode" people.  Maybe being a pioneer on a lifeless planet is a hard thing that can go wrong (for the pioneers), and the cultures that are formed by that founding experience will have a hard time being fully generous.

Global survival mode might be compatible with caring about animal welfare.  But personal survival mode is probably more effective at solving personal problems than global survival mode (or there is a decent reason to think that it could be), even if global survival mode implies that you should care about your own well-being as part of the whole, because personal survival mode is more desperate and efficient, and so more focused and driven toward the outcome of personal survival.  Maybe global survival mode is sufficient for human survival, but it would make sense that personal survival mode could outcompete it and seem attractive when times get hard.

Basically, we can imagine space colonization as a furtherance of our highest levels of civilization, all the colonists selected for their civilized values before being sent out, but maybe each colony would be somewhat fragile and isolated, and could restart at, or devolve to, a lower level of civilization, bringing back to life in it whatever less-civilized values we feel we have grown past.  Maybe from that, factory farming could re-emerge.

If we can't break the speed of light, it seems likely to me that space colonies (at least, if made of humans), will undergo their own cultural evolution and become somewhat estranged from us and each other (because it will be too hard to stay in touch), and that will risk the re-emergence of values we don't like from human history. 

How much of cultural evolution is more or less an automatic response to economic development, and how much is path-dependent?  If there is path-dependency, we would want to seed each new space colony with colonists who 1) think globally (or maybe "cosmically" is a better term at this scale), with an expanded moral circle, or, more importantly, a tendency to expand their moral circles; 2) are not intimidated by their own deaths; 3) maybe have other safeguards against personal survival mode; 4) but still are effective enough at surviving.  And try to institutionalize those tendencies into an ongoing colonial culture.  (So that they can survive, but without going into personal survival mode.)  For references for that seeded culture, maybe we would look to past human civilizations which produced people who were more global than they had to be given their economic circumstances, or notably global even in a relatively "disestablished" (chaotic, undeveloped, dysfunctional, insecure) or stressed state or environment.

(That's a guess at an answer to your third question.)

This is SUPER interesting. And it's amazing that you have put so much thought into this exact issue!

Also, I love that everybody who responded is named James! :-) 

Hi James (Banks), I wrote a post on why PB/CM might not eliminate factory farming. It would be great if you could give me some feedback there.

I think we should persuade public space agencies like NASA, ESA, and JAXA, and private spaceflight companies like SpaceX, to impose a moratorium on animal farming in space until humane methods of animal farming in space are developed. Some of us could draft a letter to elected representatives to make this happen.

It will be difficult to effectively counteract this on a long-term basis without building a center of power that's:

  • Comparable in influence to today's animal agriculture lobby, 
  • Inseparably dedicated to the welfare of all living things, and 
  • Able to sustain itself indefinitely through business activity. 

I say this because the animal ag lobby is expert at shifting the debate, and no matter how comprehensively we rebut the case for space-based slaughterhouses on practical and moral grounds, they will simply float trial balloon after trial balloon until unstudied observers are fooled into thinking that their approach is grounded in deep research rather than tactful astroturfing. 

That means we need to develop, refine, and popularize approaches to capital allocation that put ethics first. Which is easier said than done, but over the long term the only way to counteract this pressure is to build a societal soft infrastructure that's not just motivated to counteract it, but also incentivized to do so. 

I'm working on this from an investment standpoint - I'm the founder of Invest Vegan - and am hopeful that over the long term the companies we invest in will recast our collective sense of what's possible. 

But the more the merrier, you know? 

Hi Alene, thank you for writing this! I am glad that a lot of people (Jameses) are discussing here. I hope this is the beginning of a lot of useful discussions!

Alene, I think about this all the time! I've thought about starting a project or NGO solely to focus on preventing animal agriculture from being a component of space colonization. If we do successfully colonize the cosmos, then it could be that the vast majority of humans will end up living somewhere other than Earth. We could be at a unique point in history where we are actively laying the cultural and technological groundwork for those future societies. I wrote a blog post on the space food topic that might interest you: https://ecotech.substack.com/p/spacefood?s=w

You should!!!

I was glad to see James Faville link to Tobias Baumann's post on Longtermism and animal advocacy. I'll highlight a few quotes relevant to your questions (I especially like the third one):

it stands to reason that good outcomes are only possible if those in power care to a sufficient degree about all sentient beings… What hope is there of a good long-term future (for all sentient beings) as long as people think it is right to disregard the interests of animals (often for frivolous reasons like the taste of meat)? Generally speaking, the values of (powerful) people are arguably the most fundamental determinant of how the future will go, so improving those values is a good lever for shaping the long-term future.

Folks in the comments here have described a number of mechanisms for the immense suffering risks associated with a longterm future that includes animal agriculture, or more generally lacks concern for animals. Those examples make it clear to me that moral progress (with respect to animals, but elsewhere too) is a necessary but not sufficient condition for a positive longterm future. Organizations focused on making moral progress, in this conversation animal advocacy charities, are pretty clearly contributing to the longtermist cause. Of course, that's not to say animal advocacy charities are the most effective intervention from a longtermist perspective, but right now, my sense is longtermism suffers from a dearth of projects worth funding, and is less concerned with ranking their effectiveness.

 

a longtermist outlook implies a much stronger focus on achieving long-term social change, and … This entails a focus on the long-term health and stability of the animal advocacy movement.

Meta-charities like ACE, Faunalytics, and Encompass are examples of such organizations, and would probably represent the best fit for a philanthropist influenced by longtermism and interested in animal advocacy.

 

it is crucial that the movement is thoughtful and open-minded… we should also be mindful of how biases might distort our thinking (see e.g. here) and should consider many possible strategies, including unorthodox ones such as the idea of patient philanthropy.

A focus on building epistemic capacity in the animal advocacy movement leads you to similar organizations.

Thank you for the post. Very important topic. 

Hi James, thank you for your links, they are really helpful for me. I'll wait for your reports because they sound fascinating.

Thank you, Alene, for writing on this topic. I'm still reading through all the materials linked. I am deeply interested in the topic and eager to learn more in the EA Forum or anywhere else.

I don't see a way for it to go on forever.

  • We should expect the efficiency of farming to improve until no suffering is involved.
    (See the cellular agriculture/cultured meat projects.)
  • We should expect humans to change for the better.

    It would be deliriously conservative to expect as many as 10 thousand years to pass before humans begin to improve their own minds, to live longer while retaining mental clarity and flexibility, to be aware of more, to be more as they wish to be, to reckon deeply with their essential, authentic values, to learn to live in accordance with them.

    Even this deliriously conservative estimate of 10 thousand years would place the vast majority of the future after this transition.

    And after that transition, I would expect to see a very large portion of humanity realize that they have little tolerance for the suffering of other beings.
    Even if the proportion of humanity who weather uplifting and somehow remain indifferent to suffering is high, say, 30%, I'd expect the anti-suffering majority to buy all of their farms from them; few could relish suffering so much that we would not offer more, to halt it. Very little suffering would continue.

    If you think that this will not be the case, that the deep values of the majority of humanity genuinely do not oppose suffering, then it is difficult to imagine a solution, or to argue that this is even a problem that a thing like EA can solve.
    At that point, it would be a military issue. Your campaign would no longer be about correcting errors; it would be about enforcing a specific morality upon a population who authentically don't share it. You could try that. I'm not sure I would want to help. I currently think that I would help, but if it turns out that so much of humanity's performance of compassion was feigned, I could no longer be confident that I would end up on that side of the border. I'm not even sure that you could be confident that you would remain on that side of the border.
Fai

I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will stay, and the chance isn't vanishingly low. I wrote a post on it.

Also, for cause prioritization, we need to look at the expected values of the tail scenarios. Even if the chances are as low as 0.5%, or 0.1%, the huge stakes might mean the expected values could still be astronomical, which is what I argue for space factory farming. What we would need is a proof that factory farming will go away with certainty in the near/mid-term future, and I don't see good arguments for that.
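As a back-of-the-envelope illustration of that expected-value point (the probability and the stakes below are placeholder numbers I made up, not estimates from my post):

```python
# Toy expected-value calculation: even a tiny probability that space
# factory farming persists can dominate in expectation when the stakes
# are astronomical. All numbers are illustrative placeholders.

p_persist = 0.001          # assumed 0.1% chance space factory farming persists
animals_per_year = 1e11    # order of magnitude of land animals farmed today
years = 1e9                # an astronomically long future

expected_animal_years = p_persist * animals_per_year * years
print(f"Expected farmed-animal-years at stake: {expected_animal_years:.1e}")
# -> 1.0e+17: four orders of magnitude more than a full century of
#    present-day farming (1e11 animals/year * 100 years = 1e13).
```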

For example, there is no proof that cellular agriculture is more energy- and resource-efficient than all kinds of factory farming. In fact, insect farming, and the raising of certain species of fish, are very efficient. Cellular agriculture also takes a lot of energy to work against entropy. This is especially true if the requirement for the alignment of protein structures is high. In terms of organizing things against entropy, biological beings are actually quite efficient, and cellular agriculture might have a hard task outperforming all animal protein. There needs to be serious scientific research specifically addressing this issue before we can claim that cellular agriculture will be more efficient in all possible ways.

On humans becoming compassionate: I feel pessimistic about that, because here we are talking about moral circle expansion beyond our own species membership. Within our species, dominated groups, whether women, people of color, the elderly, children, or LGBTQ people, all share very similar genes with dominant humans (who, historically, were generally white men), similar neural structures (so that we can be sure they suffer in similar ways), and shared natural languages. All this made it rather easy for dominant humans to understand dominated humans reasonably well. It won't be the same for our treatment of nonhumans, such as nonhuman animals and digital minds without natural language capabilities.

Hi Alene,
Thanks, as always, for your thoughtful concern for the most abused species.
I think that effective efforts to expose factory farms now will have the most long-term impact. As noted below, just having PB/CM won't cause most people to switch. We need to give them reasons to do so. That's why I think One Step for Animals and Legal Impact for Chickens are the most important small groups. (HSUS is doing great work, but as has been said, they have more money than god.)

Awwwwww!!!!!
