This might be a bit late, but I reckon it's quite relevant to put this here in this thread. Here's my paper with Peter Singer, "AI Ethics: The Case for Including Animals": https://link.springer.com/article/10.1007/s43681-022-00187-z
I also think the same way about infants' experiences. I don't remember anything from before around 3 or 4 years old, but that doesn't mean my first 3 years of life didn't matter to me at the time.
I agree with everything you said, Michael. And this makes me think of the hernia operation that I can no longer remember. My mother told me that the doctor said he would give me some anesthetic, but I was still restrained with adhesive tape and struggled violently during the operation, so much so that the bruises on my limbs were still there a week later.
Thank you for the amazing study!
I have a question. The article says: "To avoid confusion or inaccurate comparisons, we’ve opted to include a separate time series for fish, which can be found in the second tab in the interactive line charts."
Would it be better for the article to be called "Global Land Animal Slaughter Statistics" instead of "Global Animal Slaughter Statistics"?
Thank you for writing this! It's interesting and encouraging to learn that welfare economists are starting to take animals into account, and even to calculate the impact in monetary terms!
instead try to support uploading animals into a simulated environment under our control.
But what about those physical ones that will still exist?
we are not obligated to give them pleasure
What about humans? I'm just trying to work out whether you hold this because of something like a pleasure-pain asymmetry, or because you think there is something special about humans that obliges us to give them pleasure, but not animals.
Finally on the question of predation, some thoughts on this. I do tend towards allowing it for at least conditional on backup/waiver. My r
Transformative technologies like AI or nanotech change the world a lot. I’m very uncertain how this might affect wildlife.
I think it's possible that in the future most locations on Earth, or even in the universe, could be monitored by AI and nanobots, and managed according to certain objectives. (Not suggesting this is good for wild animals, unless elimination is part of the AIs' objectives.)
But I still want to differentiate between animals who are farmed for food or other purposes on space settlements, and animals who are freely roaming in spaces created for humans to explore
It makes sense to separate them for cause prioritization and division of labor. What motivated me to question whether they differ in a philosophical sense was partly a response to challenges such as the naturalistic fallacy, or "we should care more about suffering we cause directly than suffering not caused by us", etc.
Also, animals might not be farmed only for food. Scientists... (read more)
It’s unclear if there would be farmed or “wild” animals in such space settlements.
Thank you for writing this! I want to express my view that in certain cases, such as in space settlements, the line between "wild" or "farmed" animals could blur. If the "wild" (maybe you put the word in quotes for that reason) animals were intentionally brought about, fed, monitored, and managed, what makes them not "farmed"?
Yes, you had expressed this thought in this article (which I link to somewhere in this text) and that's what influenced me to use quotes. But I still want to differentiate between animals who are farmed for food or other purposes on space settlements, and animals who are freely roaming in spaces created for humans to explore (similar to nature reserves). Perhaps the latter group could be called "managed animals". For example, in the case of Bernal Sphere, animals would be farmed in a dedicated sector of a space settlement (as you can see in this illu... (read more)
Thank you for the work, and the report. This is very important.
In a good world, no one will get sick, age, or die unless they want to.
I really admire your vision, and also that you have mentioned your concern for nonhuman sentient beings such as nonhuman animals. But I am also afraid this framing of the vision is only possible if we limit ourselves to humans, human-like things, or maybe plus a few chosen animals. Maybe some superintelligent AI in the future could solve this, but my limited mind just can't imagine how every single animal in the world can attain all of these.
This is an interesting question. I think a similar question asked by Richard Ngo, and my reply post, might be relevant to this.
But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without affecting this one?
I think this topic is more relevant than the original one.
Relevant with respect to what? For me, the most sensible standard to use here seems to be "whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis)". Yes, the topic of personal behavior is relevant to EA's stability and therefore how much good we can do, or even the long-term future. But considering that there are other ways of letting people know what is being communicated here, such as starting a new post, I don't think we should use thi... (read more)
I think that's a strong reason for people other than Jacy to work on this topic.
Watching the dynamic here, I suspect this is likely true. But I would still like to point out that there should be a norm about how these situations are handled. This likely won't be the last EA Forum post that goes this way.
To be honest, I am deeply disappointed and very worried that this post has gone this way. I admit that I might feel this way because I am very sympathetic to the key views described in this post. But I think one can imagine how they would feel if a monumental post, crucial to the causes or worldviews they care dearly about, went this way.
But what about the impact on the topic itself? Having the discussion heavily redirected to a largely irrelevant topic, and affecting the post's voting, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.
Maybe another typo? "Bostrom argues that if humanizes could colonize the Virgo supercluster": should that be "humanity" or "humans"?
Thank you so much! I used this in my research just last week. I can now revise this more easily!
I and a few other people are discussing how to start some new charities along the lines of animals and longtermism, which includes AI. So maybe that's what we need in EA before we can talk about where we can donate to help steer AI to better care for animals.
Hi Cate, thank you for having the courage to express potentially controversial claims; I upvoted (though not strongly) for this reason.
I am not a computer or AI scientist, but my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published), Peter Singer and I argue that self-driving cars should identify animals that might be in their path and avoid them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization ... (read more)
So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).
I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.
I really like this idea. In addition to financial support, maybe EA should formally take a stance on this?
But you are introducing a regress here. Already, EAs care about animal welfare and consider AI important.
But I think it's much more like: some EAs care about animal welfare, some EAs care about AI, and fewer care about both. More importantly, of the relatively few people who care about both AI and animals, only a few care about them in a connected way.
Thus, I doubt that any AI safety agreements would omit non-human animals.
I actually doubt any AI safety agreements would explicitly include non-human animals. If you look at the p... (read more)
A project called Evolving Language was also hiring an ML researcher to "push the boundaries of unsupervised and minimally supervised learning problems defined on animal vocalizations and on human language data".
There's also DeepSqueak, which studies rat squeaks using deep learning. But their motive seems to be to do better, and more, animal testing (not suggesting this is necessarily net bad).
Ah yes! I think copy and paste probably didn't work at that time, or my brain! I fixed it.
I am so glad to see people interested in this topic! What do you think of my ideas on AI for animals written here?
And I don't think we have to wait for full AGI to do something for wild animals with AI. For example, it seems to me that with image recognition and autopilot, an AI drone could identify wild animals that have absolutely no chance of surviving (fatally injured, about to be engulfed by a forest fire), and then euthanize them to shorten their suffering.
Hi Andrew, I am glad that you raised this. I agree that animal welfare matters and that AI will likely decide most of what happens in the future. I also agree that this is overlooked, both by AI people and by animal welfare people. One very important aspect is how AI will transform the factory farming industry, which might change the effectiveness of a lot of the interventions farmed animal advocates are using.
I have been researching the ethics of AI concerning nonhuman animals over the last year, supervised by Peter Singer. Along with two other authors, we wrote a pap... (read more)
Thank you for writing this! I have been thinking about some ideas that could become mega projects, just throwing some of them out here (you have already listed some of them)
Wow thank you! Very relevant!
Thank you for the great post! I think my post might be relevant to 2.1.1. Animals [1.1].
(My post discusses factory-farmed animals in the long-term future, but that doesn't mean I think they are the only source of animal suffering in the long term.)
I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will persist, and the chance isn't vanishingly low. I wrote a post on it.
Also, for cause prioritization, we need to look at the expected values from the tail scenarios. Even if the chance is as low as 0.5%, or 0.1%, the huge stakes mean the expected value could still be astronomical, which is what I argue for space factory farming. I think what we need to do is to prove why factory farming will go away in the... (read more)
I worry that factory farm AI will be overall negative, and is much less likely to be overall positive. First, it might reduce diseases, but that also means factory farms can keep animals in even more crowded conditions, because they have better disease control. Second, AI would decrease the cost of animal products, increasing demand and therefore the number of animals farmed. Third, lower prices mean animal products will be harder to replace with alternatives. Fourth, I argue that AI that is told to improve or satisfice animal welfare cannot do so robustly. Please refer to my comment above to James Ozden.
Oh okay, thanks for the clarification!
Hi James (Banks), I wrote a post on why PB/CM might not eliminate factory farming. It would be great if you could give me some feedback there.
Yes, but the focus is a little different. Abraham's post was mainly worried about wild animals, not space factory farming. Actually, Abraham assumed that factory farming will end soon, which I think we shouldn't assume without very strong reasons.
Hey James (Ozden), I am really glad that CE discussed this! I have thought about these ideas too, so I wonder if you and CE would like to discuss them? (CE rejected my proposal on AI x animals x longtermism, but I think they made the right call; these ideas were too immature and under-researched to set up a new charity!)
I now work as Peter Singer's RA (contractor) at Princeton, on AI and animals. We touched on AI alignment, and we co-authored a paper on speciesist algorithmic bias in AI systems (language models, search algorithms), with two other professors, whic... (read more)
I haven't read this fully (yet! will respond soon) but very quick clarification - Charity Entrepreneurship weren't talking about this as an organisation. Rather, there's a few different orgs with a bunch of individuals who use the CE office and happened to be talking about it (mostly animal people in this case). So I wouldn't expect CE's actual work to reflect that conversation given it only had one CE employee and 3 others who weren't!
Hey James (Faville), yes, you should publish these reports! I look forward to them in published form. (I believe I haven't read the draft of the AS one.)
Hi Alene, thank you for writing this! I am glad that a lot of people (Jameses) are discussing it here. I hope this is the beginning of a lot of useful discussions!
Sorry that I missed your comment and therefore the late reply!
Thank you for sharing. Let me clarify your suggestion: are you suggesting that I give my model for accounting for moral significance, rather than just writing about the number of beings involved?
Also, do you mind sharing your credence in the possibility of digital sentience?
I am glad I sort of answered your question!
It happens that I also worry about digital suffering, but I have two great uncertainties:
My uncertainty in 1 is much greater, maybe 100x that of 2. I wonder what your credence in artificial sentience is? It would be very useful to me if you could share it. Am I right in guessing that you think, even after adjusting for the probability of creating digital beings vs pr... (read more)
It depends on the probability one assigns to the scenario. If we assume we will get that scenario with 100% probability, my upper estimates would shrink a lot, because presumably digital people would have little incentive to keep non-digital farmed animals. But unless the earth will also be populated primarily by digital beings, my estimates for the expected number of farmed animals on earth in the far future might still roughly hold.
And it depends on what you mean by "primarily". If that means some small portion of the universe will still be occupied by humans, th... (read more)
Thank you for your comment!
Yes, I recognize that some longtermists bite the bullet and admit that humanity has virtually only instrumental value, but I am not sure they are the majority; it seems they are not. In any case, it seems to me that the vast majority of longtermists think the focus should be either humanity or digital beings. Animals are almost always left out of the picture.
I think you are right that part of this is a strategy to avoid weird messaging, but I think most longtermists I have discussed this with do not think that huma... (read more)
Hi Dony! Thank you for your comment!
I think I disagree with your view here. Let me explain why.
Consider these two objective functions:
I think we shouldn't expect that optimizing for 1 would always, robustly, ensure that 2 is also optimized at the same time. I think highly intelligent systems are quite likely to find ways to optimize for 1 that do not optimize for 2 at all. In fact, we probably don't need AI for ... (read more)
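Since the two objective functions themselves are truncated above, here is a purely hypothetical toy sketch of the general point: an optimizer that maximizes a measurable stand-in (function 1) need not do well on the quantity it was meant to track (function 2). Both functions below are invented for illustration only.

```python
# Toy illustration: optimizing a proxy objective need not optimize
# the true objective, even when the two seem related.
# Both functions are hypothetical, not the elided ones above.

def true_welfare(x):
    # the quantity we actually care about: peaks at x = 2
    return -(x - 2) ** 2

def proxy_metric(x):
    # an imperfect, measurable stand-in: peaks at x = 5
    return -(x - 5) ** 2

# a simple grid of candidate "policies" over x in [0, 10]
candidates = [i / 10 for i in range(0, 101)]

best_for_proxy = max(candidates, key=proxy_metric)
best_for_true = max(candidates, key=true_welfare)

print(best_for_proxy)                # 5.0 -- what the optimizer picks
print(best_for_true)                 # 2.0 -- what we actually wanted
print(true_welfare(best_for_proxy))  # -9.0 -- far from optimal welfare
```

The optimizer does exactly what it was told, and the better it gets at the proxy, the further it can drift from the true objective, which is the divergence the comment above is pointing at.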
Thank you Saulius! I basically agree with everything you said here. I would really hope some people from the space governance space can give us some insights here. Do you happen to know some of them?
This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.
Wild animal suffering in space
Space governance, moral circle expansion.
Terraforming other planets might cause animals to come to exist on those planets, whether through intentional or unintentional actions. These animals might live net-negative lives.
Also, we cannot rule out the possibility that there are already wild "animals" (or other sentient beings) suffering net-negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)
Relevant rese... (read more)
Preventing factory farming from spreading beyond the earth
Space governance, moral circle expansion (yes, I am also proposing a new area of interest).
Early space advocates such as Gerard O’Neill and Thomas Heppenheimer both included animal husbandry in their designs of space colonies. In our time, SpaceX, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in, or announced projects to employ, fish or insect farming in space.
This, if successful, might m... (read more)
Thank you for the great post! I wrote a reply that is too long for here: https://forum.effectivealtruism.org/posts/AZyJdher64htcpKti/re-some-thoughts-on-vegetarianism-and-veganism
I actually told some people to try this kind of diet, even though I feel very uncertain about it.
I have always been baffled by the fact that in Asia, when a lot of people speak of "cutting meat consumption", they start by cutting the meat of cows. When I try to convince them that they should do the reverse, they look extremely surprised. It's kind of a cultural thing here: cutting cow's meat first is seen as the standard, and everyone kind of "knows it has to be the case".
What is your view on how longtermism relates to or affects animal welfare work? Are you interested in potentially supporting someone to look into this intersection? If yes, what might be some of the sub-topics that you might be interested in? Thank you!
I think the last useful thing in this thread might be your last reply above. But I am going to share my final thoughts anyway.
I think I am still not convinced that the suspicion that animal/MCE advocates "suddenly embraced longtermism" (in the loose sense, not the EA/philosophical/Toby Ordian sense) is justified, even if the animal advocates I mentioned (like the ones at MFA) haven't thought explicitly about the future 100+ years out, because they might have thought that they roughly had, perhaps under a tacit assumption that what is being achieved ... (read more)