All of Fai's Comments + Replies

Steering AI to care for animals, and soon

This might be a bit late, but I reckon it's quite relevant to this thread. Here's my paper with Peter Singer, AI Ethics: The Case for Including Animals:

Reducing nightmares as a cause area

I also think the same way about infants' experiences. I don't remember anything from before around 3 or 4 years old, but that doesn't mean my first 3 years of life didn't matter to me at the time.

I agree with everything you said, Michael. And this makes me think of the hernia operation that I can no longer remember. My mother told me that the doctor said he would give me some anesthetic. But I was still tied down with adhesive tape, and I struggled so violently during the operation that the bruises on my limbs were still there a week later.

Global Animal Slaughter Statistics & Charts: 2022 Update

Thank you for the amazing study! 


I have a question. It is said that " To avoid confusion or inaccurate comparisons, we’ve opted to include a separate time series for fish, which can be found in the second tab in the interactive line charts."

Would it be better for the article to be called "Global Land Animal Slaughter Statistics" instead of "Global Animal Slaughter Statistics"?

Hello Fai, thank you for your question! We do show the statistics for fishes; they are just separated out in some of the comparison charts due to the massive difference in numbers.
Meat Externalities

Thank you for writing this! It's interesting and encouraging to learn that welfare economists are starting to take animals into account, and even to calculate the impact in monetary terms!

Wild animal welfare in the far future

instead try to support uploading animals into a simulated environment under our control.

But what about those physical ones that will still exist?

we are not obligated to give them pleasure

What about humans? Just trying to understand whether you hold this because of something like a pleasure-pain asymmetry, or because you think there is something special about humans that obliges us to give pleasure to them but not to animals.

Finally, on the question of predation, some thoughts on this. I do tend towards allowing it, at least conditional on backup/waiver. My r

... (read more)
On the physical animals: it's a long, hard process to change values and get this into the Overton window, and as the saying goes, a journey of a thousand miles begins with a single step. There's a bad habit of confronting large problems and then trying to discredit solving them because of the things left unsolved, when inaction won't solve anything at all.

My reason for saying that we're not obligated to give them pleasure is that I don't agree with the hedonic imperative mentioned, or with hedonic utilitarianism in general, because I view pleasure/pain as just one part of my morality, not its focus. For much the same reason I also tend to avoid suffering-focused ethics, which primarily aims at preventing disvalue or dolorium. It's not about a difference between animals and humans.

On the predation thing, I will send you a link to what makes changing the predator-prey relation from a natural to an artificial one morally acceptable in my eyes, primarily because death isn't final in a simulation. Here's the link: [] Sorry for taking so long to make this comment.
Wild animal welfare in the far future

Transformative technologies like AI or nanotech change the world a lot. I’m very uncertain how this might affect wildlife

I think it's possible that in the future most locations on Earth, or even in the universe, could be monitored by AI and nanobots and managed according to certain objectives. (Not suggesting this would be good for wild animals, unless elimination is part of the AIs' objectives.)

Wild animal welfare in the far future

But I still want to differentiate between animals who are farmed for food or other purposes on space settlements, and animals who are freely roaming in spaces created for humans to explore 

It makes sense to separate them for cause prioritization and division of labor. What motivated me to question whether they differ in a philosophical sense was partly responding to challenges such as the naturalistic fallacy, "we should care more about suffering we cause directly than suffering not caused by us", etc.

Also, animals might not be only farmed for food. Scientists... (read more)

There's also no proof that non-biological systems have to be outcompeted by biological brains either, so that cancels out.
Wild animal welfare in the far future

It’s unclear if there would be farmed or “wild” animals in such space settlements. 

Thank you for writing this! I want to express my view that in certain cases, such as in space settlements, the line between "wild" or "farmed" animals could blur. If the "wild" (maybe you put the word in quotes for that reason) animals were intentionally brought about, fed, monitored, and managed, what makes them not "farmed"? 

Yes, you had expressed this thought in this article (which I link to somewhere in this text), and that's what influenced me to use quotes. But I still want to differentiate between animals who are farmed for food or other purposes on space settlements, and animals who are freely roaming in spaces created for humans to explore (similar to nature reserves). Perhaps the latter group could be called "managed animals". For example, in the case of the Bernal Sphere, animals would be farmed in a dedicated sector of a space settlement (as you can see in this illu... (read more)

Legal Impact for Chickens files its first lawsuit! Costco shareholder derivative case

Thank you for the work, and the report. This is very important.

Thank you Fai!
My vision of a good future, part I

In a good world, no one will get sick, age, or die unless they want to.


I really admire your vision, and also that you have mentioned your concern for nonhuman sentient beings such as nonhuman animals. But I am also afraid this framing of the vision is only possible if we limit ourselves to humans, human-like beings, or perhaps a few chosen animals. Maybe some superintelligent AI in the future could solve this, but my limited mind just can't imagine how every single animal in the world could attain all of this.

Jeffrey Ladish (1mo):
I think it will require us to reshape / redesign most ecosystems & probably pretty large parts of many / most animals. This seems difficult but well within the bounds of a superintelligence's capabilities. I think that at least within a few decades of greater-than-human-AGI we'll have superintelligence, so in the good future I think we can solve this problem.
The Future Might Not Be So Great

But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without affecting the discussion of this one?

The Future Might Not Be So Great

I think this topic is more relevant than the original one. 

Relevant with respect to what? For me, the most sensible standard to use here seems to be "whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis)".  Yes, the topic of personal behavior is relevant to EA's stability and therefore how much good we can do, or even the long-term future. But considering that there are other ways of letting people know what is being communicated here, such as starting a new post, I don't think we should use thi... (read more)

Guy Raveh (1mo):
Thanks for the detailed reply. I think you raised good points and I'll only comment on some of them. Mainly, I think raising the issue somewhere else wouldn't be nearly as effective, both in terms of directly engaging Jacy and of making his readers aware. I noticed the post much before John made his comment. I didn't read it thoroughly or vote then, so I haven't changed my decision - but yes, I guess I'd be very reluctant to upvote now. So my analysis of myself wasn't entirely right. Hmm. Should I have not replied then? ... I considered it, but eventually decided some parts of the reply were important enough.
The Future Might Not Be So Great

I think that's a strong reason for people other than Jacy to work on this topic.

Watching the dynamic here, I suspect this is likely true. But I would still like to point out that there should be a norm about how these situations are handled. This likely won't be the last EA Forum post that goes this way.

To be honest, I am deeply disappointed and very worried that this post has gone this way. I admit that I might feel this way because I am very sympathetic to the key views described in this post. But I think one can imagine how they would feel if certain monumental posts, crucial to the causes/worldviews they care dearly about, went this way.

The Future Might Not Be So Great

But what about the impact on the topic itself? Having the discussion heavily directed to a largely irrelevant topic, and affecting its down/upvoting situation, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.

Guy Raveh (1mo):

I think that's a strong reason for people other than Jacy to work on this topic.

The Future Might Not Be So Great

Maybe another typo? : "Bostrom argues that if humanizes could colonize the Virgo supercluster", should that be "humanity" or "humans"?

Good catch!
Transcript of a talk on The non-identity problem by Derek Parfit at EAGxOxford 2016

Thank you so much! I used this in my research just last week. I can now revise this more easily!

Steering AI to care for animals, and soon

I and a few other people are discussing how to start some new charities along the lines of animals and longtermism, which includes AI. So maybe that's what we need in EA before we can talk about where we can donate to help steer AI to better care for animals.

Steering AI to care for animals, and soon

Hi Cate, thank you for your courage to express potentially controversial claims, and I upvoted (but not strongly) for this reason.

I am not a computer or AI scientist. But my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published) Peter Singer and I argue that self-driving cars should identify animals that might be on the way and dodge them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization ... (read more)

Steering AI to care for animals, and soon

So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).


I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.

Love this. It's one of the things on my "possible questions to think about at some point" list. My motivation would be:

  1. Try to figure out what specific animals care about. (A simple sanity check here is to try to figure out what a human cares about, which is hard enough. Try expanding this question to humans from different cultures, and it quickly gets more and more complicated.)
  2. Try to figure out how I'm figuring out what animals care about. This is the primary question, because we want to generalize the strategies for helping beings that care about different things than us. This is useful not just for animals, but also as a high-level approach to the pointers problem in the human case as well.

Most of the value of the project comes from 2, so I would pay very careful attention to what I'm doing when trying to answer 1. Once I make an insight on 1, what general features led me to that insight?
Megaprojects for animals

I really like this idea. In addition to financial support, maybe EA should formally take a stance on this?

That would be amazing! I'm not well connected within the EA community, so if somebody can help out with this that would be awesome!
Steering AI to care for animals, and soon

But you are introducing a regress here. Already, EAs care about animal welfare and consider AI important.


But I think it's much more like: some EAs care about animal welfare, some EAs care about AI, and fewer care about both. More importantly, of the relatively few people who care about both AI and animals, quite few care about them in a connected way.

Thus, I doubt that any AI safety agreements would omit non-human animals.

I actually doubt any AI safety agreements would explicitly include non-human animals. If you look at the p... (read more)

It is not that people do not care [] as in do not consider the issue; they just do not prioritize it in their actions, since they do not think that is how they make the highest impact (e.g. due to specialization). Sure, that makes sense. Discourse on this topic has not extensively taken place so far, so the ways of connecting the two have not been much advanced.

Yes, perhaps it is best when it is implied that animals are included. Then, animals are included in statements other than that of the Montreal University, such as the Asilomar Principles []. "[B]eneficial intelligence" and how "legal systems [can] be more fair and efficient" should be researched by teams that "actively cooperate" on the objective of "Shared Benefit." Perhaps, by 'people' they meant persons, so any entities that currently have or should have legal personhood status, such as non-human animals.

But AI, even now, is smarter. It can read anything, so it can figure out that 'good' means 'all sentience benefits.' I have not yet asked [] IGPT-3 [], but just asking Google [] and skimming the results, it is clear that various forms of sentience should be considered. Perhaps it is a matter of making AI realize this by asking them a few questions. It will be the same result, given relevant early-on entertainment of key questions. Just, fewer utility monsters [] will be included i
Steering AI to care for animals, and soon

A project called Evolving Language was also hiring an ML researcher to "push the boundaries of unsupervised and minimally supervised learning problems defined on animal vocalizations and on human language data".

There's also DeepSqueak, which studies rat squeaks using DL. But their motive seems to be to do better, and more, animal testing. (Not suggesting this is necessarily net bad.)

Steering AI to care for animals, and soon

Ah yes! I think copy and paste probably didn't work at that time, or my brain! I fixed it.

Steering AI to care for animals, and soon

I am so glad to see people interested in this topic! What do you think of my ideas on AI for animals written here?

And I don't think we have to wait for full AGI to do something for wild animals with AI. For example, it seems to me that with image recognition and autopilot, an AI drone can identify wild animals that have absolutely no chance of surviving (fatally injured, about to be engulfed by forest fire), and then euthanize them to shorten their suffering.

Steering AI to care for animals, and soon

Hi Andrew, I am glad that you raised this. I agree that animal welfare matters and that AI will likely decide most of what happens in the future. I also agree that this is overlooked, both by AI people and by animal welfare people. One very important aspect is how AI will transform the factory farming industry, which might change the effectiveness of a lot of interventions farmed animal advocates are using.

I have been researching the ethics of AI concerning nonhuman animals over the last year, supervised by Peter Singer. Along with two other authors, we wrote a pap... (read more)

Fai, your link to the paper didn't work for me, is this the correct link?
Megaprojects for animals

Thank you for writing this! I have been thinking about some ideas that could become mega projects, just throwing some of them out here (you have already listed some of them)

  • Pay to install electric stunners in the "small fish slaughter machines" that are popular in China. The idea is to pay well above the cost of installing such stunners, so that the whole industry that produces these machines is disrupted. I am doing research on this potential project. My tentative judgement is that installing such stunners might be cheap - it could be as simple as con
... (read more)
These are interesting ideas! I think that AI systems designed with animal welfare in mind would be more reliant on computer vision and sensory data than NLP, since animals don't speak in human tongues. This blog post about using biologgers to measure animal welfare [] comes to mind.
I'm nervous about implementing AI solutions in the near term, because, as you allude, what they are used to achieve is a matter of who's programming them :/
These are very interesting. The electric stunning can be beneficial in this way: if animals are at least intuitively aware of what they live for - to be eaten, or to produce animal products and then be eaten - and all their life they just chill and then it is just a stun, then it's quite OK. If they could, they would probably contribute further, by some advancement, but since we currently can only use their contributions in this way, they may be quite OK just chilling and taking care of their lives.

I read there at least were issues with the stunning machines in US slaughterhouses - simple technical issues such as poor placement or inadequate current. Also, ritual killing is an issue; stunning is more elegant and should be the new ritual. An electrical bath for crayfish makes sense too. It can be just a simple electrode, which prevents the issue of crawling (and thus loss of crayfish and capital). Of course, the alternative of eating rare tofus can be even better, but for the time being there should be manufacturers that would gladly produce this device.

AI monitoring welfare - I would not implement it yet; maybe in a few years, when institutions become more interested in monitoring. It's a moonshot, plus there may be other tech solutions with higher marginal cost-effectiveness. For example, if you focus on cricket farms - I think that if they miss simple nutrients, such as salt, they eat each other. This can extensively slump the atmosphere there for large numbers of individuals. So, maybe a salinity/humidity/etc. monitoring device that even a worker can go around with and just poke around, and depending on the values, nutrients are automatically dispersed. Of course, insect welfare research should perhaps be prioritized, because what if crickets just love the thrill of eating others and being eaten, since they live to the fullest or suffer in any case, so optimal salinity makes very little difference. I think plant-based is more promising with the cost
A longtermist critique of “The expected value of extinction risk reduction is positive”

Thank you for the great post! I think my post might be relevant to 2.1.1. Animals [1.1]. 

(My post discusses factory-farmed animals in the long-term future, but that doesn't mean I worry about that as the only source of animal suffering in the long term.)

Thanks for the kind feedback. :) I appreciated your post as well—I worry that many longtermists are too complacent about the inevitability of the end of animal farming (or its analogues for digital minds).
Who is protecting animals in the long-term future?

I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will stay, and the chance isn't vanishingly low. I wrote a post on it.

Also, for cause prioritization, we need to look at the expected values of the tail scenarios. Even if the chance could be as low as 0.5%, or 0.1%, the huge stakes might mean the expected values could still be astronomical, which is what I argue for space factory farming. I think what we need to do is to prove why factory farming will go away in the... (read more)

Who is protecting animals in the long-term future?

I worry that factory farm AI would be overall negative, and think it is much less likely to be overall positive. First, it might reduce diseases, but that also means factory farms can keep animals in even more crowded conditions because they have better disease control. Second, AI would decrease the cost of animal products, causing more demand and therefore increasing the number of animals farmed. Third, lower prices mean animal products will be harder to replace with alternatives. Fourth, I argue that AI told to improve or satisfice animal welfare cannot do so robustly. Please refer to my comment above to James Ozden.

Who is protecting animals in the long-term future?

Hi James (Banks), I wrote a post on why PB/CM might not eliminate factory farming. Would be great if you can give me some feedback there.

Who is protecting animals in the long-term future?

Yes. But the focus is a little bit different. Abraham's post was mainly worrying about wild animals, not space factory farming. Actually, Abraham assumed that factory farming will end soon, which I think we shouldn't assume without very very strong reasons.

Who is protecting animals in the long-term future?

Hey James (Ozden), I am really glad that CE discussed this!  I thought about them too, so wonder if you and CE would like to discuss? (CE rejected my proposal on AI x animals x longtermism, but I think they made the right call, these ideas were too immature and under-researched to set up a new charity!)

I now work as Peter Singer's  RA (contractor) at Princeton, on AI and animals. We touched on AI alignment, and we co-authored a paper on speciesist algorithmic bias in AI systems (language models, search algorithms), with two other professors, whic... (read more)

Great to learn about your paper, Fai; I didn't know about it till now, and this topic is quite interesting. I think when longtermism talks about the far future it's usually "of humanity" that follows, and this always scared me, because I was not sure whether this is speciesist or whether there is some silent assumption that we should also care about other sentient beings. I don't think there were animal-focused considerations in Toby Ord's book (I might be wrong here) and similar publications? I would gladly read your paper then. I quickly jumped to its conclusion, and it kind of confirms my intuitions in regards to AI (but also long-term future work in general): "Up to now, the AI fairness community has largely disregarded this particular dimension of discrimination. Even more so, the field of AI ethics hitherto has had an anthropocentric tailoring. Hence, despite the longstanding discourse about AI fairness, comprising lots of papers critically scrutinizing machine biases regarding race, gender, political orientation, religion, etc., this is the first paper to describe speciesist biases in various common-place AI applications like image recognition, language models, or recommender systems. Accordingly, we follow the calls of another large corpus of literature, this time from animal ethics, pointing from different angles at the ethical necessity of taking animals directly into consideration [48,155–158]..."

I haven't read this fully (yet! will respond soon) but very quick clarification - Charity Entrepreneurship weren't talking about this as an organisation. Rather, there's a few different orgs with a bunch of individuals who use the CE office and happened to be talking about it (mostly animal people in this case). So I wouldn't expect CE's actual work to reflect that conversation given it only had one CE employee and 3 others who weren't!

Who is protecting animals in the long-term future?

Hey James (Faville), yes, you should publish these reports! I look forward to reading them in published form. (I believe I haven't read the draft of the AS one.)

Who is protecting animals in the long-term future?

Hi Alene, thank you for writing this! I am glad that a lot of people (Jameses) are discussing here. I hope this is the beginning of a lot of useful discussions!

Why the expected numbers of farmed animals in the far future might be huge

Sorry that I missed your comment and therefore the late reply! 

Thank you for sharing. Let me clarify your suggestion here: do you mean that I should give my model of accounting for moral significance, rather than just writing about the number of beings involved?

Also, do you mind sharing your credence of the possibility of digital sentience?

Yes, that's an accurate characterization of my suggestion. Re: digital sentience, intuitively something in the 80-90% range?
Why the expected numbers of farmed animals in the far future might be huge

I am glad I sort of answered your question!  

It happens that I also worry about digital suffering, but I have two great uncertainties:

  1. Whether artificial consciousness is possible.
  2. If 1 is possible, whether these beings can have the capacity for positive and negative experiences.

My uncertainty in 1 is much greater than in 2, maybe 100x. I wonder what your credence in artificial sentience is? It would be very useful for me if you could share. Am I right in my guess that you think, even after adjusting for the probability of creating digital beings vs pr... (read more)

Rohin Shah (5mo):
I'm pretty confident artificial consciousness is possible, though I haven't looked into it much. This is primarily because it seems like consciousness will be a property of the cognition, and independent of the substrate running that cognition.

As an intuition pump, suppose we understand in great detail the exact equations governing the firing of synapses in the brain, and we then recreate my brain in software using these equations. I claim that, given an environment that mimics the real world (i.e. inputs to the optic nerve that are identical to what the retina would have received, similarly for the other senses, and outputs to all of the muscles, including the tongue (for speech)), the resulting system would do exactly what I would do (including e.g. saying that I am conscious when asked). It seems very likely that this system too is conscious. (I'm also confident that digital beings can have the capacity for positive and negative experiences.)

If you ask me about particular digital "beings" (e.g. AI systems, databases, Google search), then I become a lot more uncertain about (1) and (2).
Why the expected numbers of farmed animals in the far future might be huge

It depends on the probability one assigns to the scenario. If we assume we will get that scenario with 100% certainty, my upper estimates would shrink a lot, because presumably digital people would have little incentive to keep non-digital farmed animals. But unless the earth is also replaced with primarily digital beings, my estimates for the expected number of farmed animals on earth in the far future might still roughly hold.

And it depends on what you mean by "primarily". If that means some small portion of the universe will still be occupied by humans, th... (read more)

Rohin Shah (5mo):
Thanks! (I was mainly thinking about the ratio of digital people : animals, which is what matters for choosing between actions that help all digital people and actions that help all animals.)
Why the expected numbers of farmed animals in the far future might be huge

Thank you for your comment! 

Yes, I recognize that some longtermists bite the bullet and admit that humanity has virtually only instrumental value, but I am not sure they are the majority; it seems they are not. In any case, it seems to me that the vast majority of longtermists think the focus should be either humanity or digital beings. Animals are almost always left out of the picture.

I think you are right that "part of this" is a strategy to avoid weird messaging, but I think most longtermists I discussed with do not think that huma... (read more)

Yes, all those first points make sense. I did want to just point to where I see the most likely cruxes. Re: neuron count, the idea would be to use various transformations of neuron counts, or of a particular type of neuron. I think it's a judgment call whether to leave it to the readers to judge; I would prefer giving what one thinks is the most plausible benchmark way of counting and then giving the tools to adjust from there, but your approach is sensible too.
Why the expected numbers of farmed animals in the far future might be huge

Hi Dony!  Thank you for your comment!

I think I disagree with your view here. Let me explain why.

Consider these two objective functions:

  1. Maximize the efficiency of raising tilapia (or any species of animals)
  2. Minimize the chance that the tilapia raised live net negative lives

I think we shouldn't expect that optimizing for 1 would always, robustly, ensure that 2 is also optimized at the same time. I think highly intelligent systems are quite likely to identify ways to optimize for 1 that do not optimize for 2 at all. In fact, we probably don't need AI for ... (read more)

Why the expected numbers of farmed animals in the far future might be huge

Thank you Saulius! I basically agree with everything you said here. I would really hope some people from the space governance space can give us some insights here. Do you happen to know some of them?

Unfortunately, I don't
The Future Fund’s Project Ideas Competition

This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.

The Future Fund’s Project Ideas Competition

Wild animal suffering in space

Space governance, moral circle expansion.


Terraforming other planets might cause animals to come to exist on those planets, through either intentional or unintentional actions. These animals might live net-negative lives.

Also, we cannot rule out the possibility that there are already wild "animals" (or other forms of sentient beings) living net-negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)

Relevant rese... (read more)

Dawn Drescher (5mo):
Brian Tomasik [] and Michael Dello-Iacovo [] have related articles.
Dawn Drescher (5mo):
Another great concern of mine is that even if biological humans are completely replaced with ems or de novo artificial intelligence, these processes will probably run on great server farms that likely produce heat and need cooling. That results in a temperature gradient that might make it possible for small sentient beings, such as invertebrates, to live there. Their conditions may be bad, they may be r-strategists and suffer in great proportions, and they may also be numerous if these AI server farms spread throughout the whole light cone of the future. My intuition is that very few people (maybe Simon Eckerström Liedholm?) have thought about this so far, so maybe there are easy interventions to make that less likely to happen.
Here's a related question [] I asked.
The Future Fund’s Project Ideas Competition

Preventing factory farming from spreading beyond the earth

Space governance, moral circle expansion (yes I am also proposing a new area of interest.)


Early space advocates such as Gerard O'Neill and Thomas Heppenheimer both included animal husbandry in their designs of space colonies. In our time, SpaceX, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in, or announced projects to employ, fish or insect farming in space.

This, if successful, might m... (read more)

Re: Some thoughts on vegetarianism and veganism

I have actually told some people to adopt this kind of diet, even though I feel very uncertain about it.


I was always baffled by the fact that in Asia, when a lot of people speak of "cutting meat consumption", they start by cutting out cows' meat. When I tried to convince them that they should do the reverse, they looked extremely surprised. It's kind of a cultural thing here that cutting cows' meat first is seen as standard; everyone kind of "knows it has to be the case".

I think that's normal in Canada and probably many other Western countries. People think mammals matter more individually (or like them or identify more with them), and that their meat is less healthy to eat and worse for the environment to farm. It's plausible to me that the average chicken matters more than the average farmed mammal because of how much worse chickens' lives seem to be. I started my transition to veganism by cutting out mammal meat, too, although I think I was only starting to get into EA at the time.
Animal Welfare Fund: Ask us anything!

What is your view on how longtermism relates to or affects animal welfare work? Are you interested in potentially supporting someone to look into this intersection? If yes, what might be some of the sub-topics that you might be interested in? Thank you!

Good question! In short, I think it may be important, but I feel pretty unsure about what the implications are. I guess it generally updates me somewhat towards some of the more speculative things that fall inside our remit, including wild animal and invertebrate welfare. But basically, I think that longtermism is still way underexplored... so when we start talking about longtermism’s intersection with something like animal welfare, I think it is just really, really underexplored. At this point, there may have been a few blog posts looking at that intersection. So yes, I would be interested in potentially supporting someone to look further into this intersection, and I believe we mentioned a point on that in our RFP. Quick thoughts, in terms of subtopics that could be interesting (only if the right person(s) were to do it):

* Further examine, from a longtermist perspective, to what extent wild animal welfare or invertebrate welfare is important
* Further examine plausible ways emerging tech may entrench bad practices for animals
* Explore how likely animal-friendly values are to be adopted/accounted for by an AI (obviously doing so in a way that isn’t going to put any AI folks offside)
* Do some initial scoping of possible ways a philanthropist might give if they were interested in building a field/subfield around AW and longtermism

That said, I don’t feel I have particularly well-developed thoughts on which subtopics look most promising at this intersection. I would also be keen to hear from the community which subtopics seem promising, and if anyone is interested in doing work/research at this intersection, please reach out!
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence

I think the last useful thing in this thread might be your last reply above. But I am going to share my final thoughts anyway.

I think I am still not convinced that the suspicion that animal/MCE advocates had "suddenly embraced longtermism" (in the loose sense, not the EA/philosophical/Toby Ordian sense) is justified, even if the animal advocates I mentioned (like the ones at MFA) haven't thought explicitly about the future beyond 100+ years, because they might have thought that they roughly had, perhaps under a tacit assumption that what is being achieved ... (read more)

This is quite an interesting observation/claim. I've observed something kind-of similar with many non-EA people interested in reducing nuclear risks:

* It seems they often do frame their work around reducing risks of extinction or permanent collapse of civilization.
* But they usually don't say much about precisely why this would be bad, and in particular how it would cut off all the possible value humanity could experience/create in the future.
* But really, the way they seem to differ from EA longtermists who are interested in reducing nuclear risk isn't the above point, but rather how they seem to too uncritically and overconfidently assume that any nuclear exchange would cause extinction and that whatever interventions they're advocating for would substantially reduce the risk.

So this all ties into a more abstract, broad question about the extent to which the EA community's distinctiveness comes from its moral views (or its strong commitment to actually acting on them) vs its epistemic norms, empirical views, etc. Though the two factors obviously interrelate in many ways. For example, if one cares about the whole long-term future and is genuinely very committed to actually making a difference to it (rather than just doing things that feel virtuous in relation to that goal), that could create strong incentives to actually form accurate beliefs, not jump to conclusions, recognise reasons why some problem might not be an extremely huge deal (since those reasons could push in favour of working on another problem instead), etc.