Fai

1145 · Joined Jun 2020

Bio

Contractor RA to Peter Singer, Princeton

Comments (113)

Thank you for your replies, Saulius.

Participating in CLR's fellowship does make you more informed about their internal views. Thank you for sharing that. I am personally not convinced by CLR's open publications that those are things that would, in expectation, reduce s-risk substantially. But maybe that's due to my lack of mathematical and computer science capabilities.

I agree but I don’t think this changes much. There can be so many digital minds for so long, that in terms of expected value, I think that digital minds dominate even if you think that there is only a 10% chance that they can be sentient, and a 1% chance that they will exist in high numbers (which I think is unreasonable). 

I would reach the same conclusion if I had the same probabilities you assigned, and the same meaning of "high numbers". I believe my credence here should depend on whether we are the only planet with a civilization right now. If we are, and if by "high numbers" you mean >10,000x the expected number of wild animals there will be in the universe, then my current credence that a high number of digital beings will actually be created is <1/10000 (in fact, contrary to what you believe, I think a significant portion of that probability would come from the urge to simulate the whole universe's history of wild animals). BTW, I change my credence on these topics rapidly, by orders of magnitude, and there are many considerations related to this, so I might have changed my mind by the next time we discuss it.
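To make the expected value comparison above concrete, here is a minimal back-of-the-envelope sketch in Python. The 10% and 1% figures are the ones quoted above; the wild-animal count and the reading of "high numbers" as a 10,000x multiplier are illustrative assumptions of mine, not figures from the post.

```python
# Back-of-the-envelope sketch of the expected value comparison above.
# The wild-animal count and the 10,000x reading of "high numbers" are
# illustrative assumptions, not figures from the post.

p_sentient = 0.10       # chance digital minds can be sentient (quoted figure)
p_high_numbers = 0.01   # chance they exist in "high numbers" (quoted figure)
wild_animals = 1e20     # hypothetical expected number of future wild animals
multiplier = 1e4        # reading "high numbers" as >10,000x the wild-animal count

expected_digital_minds = p_sentient * p_high_numbers * multiplier * wild_animals
print(expected_digital_minds / wild_animals)  # 10.0 -> digital minds still dominate

# With my own credence of <1/10000 for the "high numbers" scenario, the same
# product drops below 1x of the wild-animal count, which is why our
# conclusions diverge.
```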

But I do have other considerations that would likely lead me to conclude that, if there are ways to reduce digital beings' suffering, this is a priority, or even the priority. These considerations can be summarized in one question: if sentient digital beings can exist and will exist, how deeply will they suffer? It seems to me that on digital (or even non-biological analog) hardware, suffering could be much more intense and run much faster than on biological hardware.

I second the recommendation for Saulius to continue to work on farmed animal welfare. But I disagree with the view that uncertainty alone can undermine the whole case for longtermism.

Hi Ula. I just want to let you know that I used to work on animal welfare and moved on to work on AI. But I didn't stop doing animal welfare work, because I work on AI & animals.

Hi Saulius, I wonder if you have factored your points 2 & 3 above into your view that digital beings are a priority for longtermism, and factory farming a priority for non-longtermist animal welfare. It seems that both cause areas, if taken consistently and seriously enough, would go against (organic) human interests and are not what most people want.


But didn't the OP also use an expected value calculation to conclude that digital minds are going to dominate the value in the future, while admitting that the tractability of helping digital minds might be even lower than that of helping wild animals?

This is a great question. I totally missed this consideration while reading this post but this question is imperative to keep in mind while thinking about this topic.

Thank you for writing this! I have major disagreements with you on this.

However, to me, WAW doesn’t seem to be the most important thing for the far future - not even close. Digital minds could be much more efficient, thrive in environments where biological beings can’t, utilize more resources, and seem more likely to exist in huge numbers. 

(in a separate paragraph) The tractability of trying to reduce digital mind suffering might be even lower than for longtermist animal welfare work, but the scale is much much higher.

The first passage I quoted is plausible, or even likely to be true (I don't have informed views on this yet). But even assuming it is true, there is something wrong with using this argument to claim that "Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work." What's wrong is the difference in the standards of rigor you applied to the two cause areas. You applied a high level of rigor in evaluating the tractability of WAW as a non-longtermist cause area (so much so that you even wrote a shortform on it) and concluded that "There seem to be no cost-effective interventions to pursue now". But you didn't apply the same level of rigor in evaluating the tractability of helping future digital minds; in fact, I believe you didn't attempt to evaluate it at all. If you applied the same standard to WAW and digital minds as cause areas, you would either evaluate the tractability of neither, which leads to conclusions like "WAW is far more important than factory farming" (a view I believe you moved away from partly because you did evaluate tractability), or you would evaluate both, in which case you might not necessarily conclude that WAW is far less important than digital minds from the longtermist perspective.

In fact, I think it's likely that your prioritization between digital minds and WAW might switch. First, there are still huge uncertainties about whether there will actually be digital minds that are actually sentient. These uncertainties are much higher than for wild animals: we know both that sentient wild animals can exist and that a lot of them will exist (for a certain amount of time), but we are uncertain whether sentient digital minds are possible and, if they are possible, whether they will actually be produced in huge numbers. Also, in terms of tractability, there is little evidence for most people to think there is anything we can do now to help future digital minds. As far as I know, Holden Karnofsky, Sentience Institute (SI), and the Center on Long-Term Risk (CLR) are the only three EA-affiliated entities working on digital minds. They might provide some evidence that something can be done, but I suspect the update is not large: CLR doesn't disclose most of their research, SI is still at a very early stage of their digital minds research, and Holden Karnofsky doesn't seem to have said much about what we can do to help digital minds in particular. Of course, research to figure out whether there could be interventions could itself be an impactful intervention. But that's true for WAW too. If this is a reason for digital minds being more important than longtermist animal welfare (note: this would imply digital minds' welfare is also more important than "longtermist human welfare"), then I wonder why the same form of argument wouldn't make WAW far more important than factory farming and lead you to conclude: "The tractability of trying to reduce wild animal suffering might be lower than work in tackling factory farming, but the scale is much much higher."

Also, if you do use CLR and SI as your main evidence for believing that helping digital minds is tractable, I am afraid you might have to change another conclusion in your post. SI is not entirely optimistic that the future with digital minds is going to be positive (and from chatting with their people, I believe they seem pessimistic), and CLR seems to think that astronomical suffering from digital minds is pretty much the default future scenario. If you put high credence in their views about digital minds, I can't see how you would conclude that "reducing x-risks is much much more promising". To be fair to SI and CLR, my understanding is that they are strongly opposed to holding extremely unpopular and disturbing ideas such as increasing X-risk, for the reason that doing so would actually increase suffering-risks. I believe this is the correct position to hold for people who think the future is negative in expectation. But I think, at a minimum, if you put high credence in SI and CLR's views, you should probably be at least skeptical of the view that decreasing X-risk is a top priority.


NOTE 1 (on the last paragraph): I struggled a lot in writing the last sentence, because I am clearly being self-defeating by saying it right after expressing what I called "the correct position".

NOTE 2: Some longtermists define X-risk as the extinction of intelligent life OR the "permanent and drastic destruction of its potential for desirable future development". Under this definition, S-risk seems quite clearly a form of X-risk. So it is possible for someone who solely cares about S-risk to claim that their priority is reducing X-risk. But operationally speaking, S-risk and X-risk seem to be used entirely separately.

NOTE 3: Personally, I have a different argument against increasing extinction risk than cooperative reasons. Even if one holds that the future is negative in expectation, it doesn't necessarily follow that it is better for earth-originated intelligent beings to go extinct now, because it is possible that most suffering in the future will be caused by intelligent beings not originating from earth. In fact, if there are many non-earth-originated intelligent beings, it seems extremely likely that most of the future suffering (or well-being) will be created by them, not "us". Given that we are a group of intelligent beings who are already thinking about S-risk (after all, we have SI and CLR), and have thereby proven to be a kind of intelligent being that could at least possibly develop into beings who care about S-risk, maybe this justifies humanity continuing even under the negative-future view.

Strong upvote. I really admire this piece. Thank you for writing and posting it. I think literature and the arts played a significant (I don't mean major, and certainly not sole) role in expanding the moral circle to the point of including all humans, as they were a great way for people to understand the circumstances and hardships of people with strikingly different cultural, linguistic, social, and economic backgrounds from their own. Maybe they could work for nonhuman animals too.

I really wish (and I believe it is very possible) that there will one day be an AI that can automatically animate a script, so that great scripts like this can gain yet another powerful way of conveying important messages. Yes, the animation is going to be horrific, but it may also save lives.

Also, I think it is beneficial that this kind of post occasionally appears on the forum.

It’s the third largest fish farmed in the world’s aquaculture production.

I think it's better to make it clear that this is referring to the ranking by weight. In terms of the number of individuals, they rank about 6th or 7th.

I agree. A useful resource for farmed fish statistics is OPP's open spreadsheet. It shows that the number of farmed fish alone is 10x the human population.
