Timothy Chan

338 karma · Joined Aug 2021

Bio

Interested in reducing suffering.

Comments (23)

Thanks a lot for raising this, Geoffrey. A while back I mentioned some personal feelings and possible risks related to the current Western political climate, from one non-Westerner's perspective. You've articulated my intuitions very nicely here and in that article.

From a strategic perspective, it seems to me that the longer AGI takes to develop, the more likely it is that the expected decision-making power will be shared globally. EAs should consider that they might end up in that world, and that it might not be a good idea to create and enforce easily-violated, non-negotiable demands on issues that we're not prioritizing (e.g. it would be quite bad if a Western EA ended up repeatedly reprimanding a potential Chinese collaborator simply because the latter speaks in a way that comes across as blunt from the former's perspective). To be clear, China has some of this as well (mostly relating to its geopolitical history), and I think feeling less strongly about those issues could be beneficial.

I find it odd that many people's ideas about other minds don't involve, or even contradict, the existence of some non-arbitrary function that maps a (finite) number of discrete fundamental physical entities (assuming physics is discrete) in a system to a corresponding number of minds (or some potentially quantifiable property of minds) in that same system.

I have intuitions (which could be incorrect) that "physics is all there is" and that "minds are ultimately physical," and it feels possible, in principle, to unify them somehow and relate "the amount of stuff" in both the physical and mental domains through such a function.

To me, this solution ("count all subsets of all elements within systems") proposed by Brian Tomasik appears to be among the plausible non-arbitrary options, and it could also be especially ethically relevant. Solutions like this, which suggest the existence of a very large number of minds, imply moral wagers, e.g. to minimize possible suffering in the kinds of minds that are implied to be most numerous (in this case, those that comprise ~half of everything in the universe), which might make them worth investigating further.
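A minimal toy calculation of the combinatorial point behind "~half of everything" (my own illustration with a hypothetical n of 100, not anything from Tomasik's essay): of the 2^n subsets of an n-element system, subset sizes concentrate sharply around n/2, so if every subset counted as a mind, the overwhelming majority of those minds would comprise roughly half the system's elements.

```python
from math import comb

# Toy sketch (assumption: treat each subset of a system's n fundamental
# entities as one candidate mind, per the subset-counting proposal).
n = 100                # hypothetical number of fundamental entities
total_subsets = 2 ** n # every subset counted as a mind
# Subsets whose size is within 10 of n/2:
near_half = sum(comb(n, k) for k in range(40, 61))
print(f"Share of subsets with 40-60 of {n} elements: {near_half / total_subsets:.3f}")
# Prints ~0.96: almost all subsets contain roughly half of the system's elements.
```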

Even if physics is continuous rather than discrete, it still seems possible that there could be a mapping from continuous physics to discrete minds. (Disclaimer: I don't know much physics, and I haven't thought much about how it relates to the philosophy of mind.)

This is all speculative and counterintuitive. On the other hand, common-sense intuitions developed through evolution might not accurately represent the first-person experiences, or lack thereof, of other systems. They seem to have evolved instead because they helped model complicated, fitness-relevant systems by picturing them as similar to one's own mind. Common-sense intuitions aren't necessarily reliable, and counterintuitive conclusions could turn out to be true.

Relatedly, it seems good to make efforts to present animal welfare in a less polarizing light, perhaps by avoiding lumping it together with other cultural stances of the part of the political spectrum it's most associated with.[1]

I've noted previously how polarization also happens at the international level. My basic model of the current situation is that (1) advocacy/actions perceived to be extreme happen in the Anglo-American world -> (2) a lot of people in, say, China find out about them, find them distressing, associate them with being "Western", and then start resisting practices from the West. (On the other hand, other people might find such advocacy/actions appealing, but in some cases this also seems to come with polarization, which gives opponents more reason to resist.)

I think many positions considered progressive from perspectives outside the Anglo-American world are important to advance, but advancing them also seems to have become more difficult because of a perception (which may or may not be accurate) that doing so changes a society in ways non-Anglo-American people fear. One solution might be for activists, from the Anglo-American world and elsewhere, to focus on issues closer to the center that are also particularly effective to work on.

  1. ^

    Disclaimer: I find myself leaning conservative on social issues while leaning progressive on economic issues (using American politics as a baseline), although I also feel as if I should update my stances as EA-relevant macrostrategic insights are uncovered.

Anecdotally, as another person from a non-Western country (currently living in the West), I find it quite disconcerting that (certain parts of) Western cultures have become as you described. I come from a very Westernized city of that country/continent and grew up being taught by Western teachers in Westernized schools where I spoke to everyone in English.

This isn't just limited to EA. It looks like this has increased, and seems likely to continue increasing, tensions between Western and non-Western cultures. I can't say for sure that this would be the primary reason for EA splitting (perhaps into a more rationalist-leaning side and a more progressive-leaning side), but I'm also pessimistic about cooperation and reaching compromises.

I think in context, i.e. following OP's sentence "If the key decision makers of the future decide they have to bring animals to other planets ... introducing herbivores would be preferred ...", by 'every individual animal' OP means every individual animal brought to other planets, not every single animal in existence. OP also seems to be focusing on terraforming rather than space colonization.

So I'm not sure why you think that it's "an unreasonably demanding standard". There are certainly ways of assigning value that would say that creating additional lives with negative experiences makes things worse for those lives compared to refraining from creating them (e.g. minimalist axiologies). These may be rarer within the EA community, but they definitely exist outside of it (e.g. some forms of Buddhist ethics). If that's the case, and we're only talking about the lives created, then opposing bringing animals will indeed help every single animal involved.

The implication of this is only that we'd find it preferable not to terraform, which isn't paralysis, just opposition to that particular policy.

I'm not sure "eliminate" is the right way to put it. Reducing net primary productivity (NPP) in legally acceptable ways (e.g. converting lawns into gravel) could end up being cost-effective, but "eliminate" seems too strong here.

Doing NPP reduction in less acceptable ways could make a lot of people angry, which seems bad for advocacy to reduce wild animal suffering. As Brian Tomasik pointed out somewhere, most expected future wild animal suffering wouldn't take place on Earth, so getting societal support to prevent terraforming seems more important.

I guess there are multiple aspects to this. While he might seem to be open at the cost of personal legal risk, it might be that he's also telling an inaccurate story of what happened. (EDIT: slightly edited the wording regarding openness/good faith in this one paragraph after reading Lukas's take)

(Heavy speculation below)

A crucial point given SBF's significant involvement in EA and interest in utilitarianism is whether he actually believed in all of it, and how strongly.

There are some signs he believed in it strongly: being associated with EA rather than a more popular and commonly accepted movement, being very knowledgeable about utilitarianism, early involvement, donations etc.

If he did believe in it strongly, it could be that this is just him "doing [what he believes is] the most good" by potentially being dishonest about some things (whether or not out of bad intentions), in order to, perhaps (in his mind), deflect the harm he's caused EA and the future, at the cost of personal legal risk (which is minor in comparison from a utilitarian perspective). (Then again, another (naive) utilitarian strategy might be to say "muhahaha, I was evil all along!" and get people to think that he used EA as a cover and that he isn't representative of it. If that also works (in expectation, to him), I'm not so sure why he picked one strategy over the other.)

This is all speculative, and a bit unusual for the average defendant, but SBF is quite unusual (as is EA, to be fair) and we might have to consider these unusual possibilities.

It might be useful to note that, in the context of the Kelsey Piper interview, "ethics" might have referred to the ethics of rule-following/deontology (also noted here: https://forum.effectivealtruism.org/posts/vjyWBnCmXjErAN6sZ/kelsey-piper-s-recent-interview-of-sbf?commentId=ZqbYkJrmeRNnmaoio). The part where he sounds like he's talking about EA (he doesn't mention EA directly in the video) would be consistent with that particular interpretation of "ethics was mostly a front".

That's a good point. I hadn't thought about that. I've added your observations to that part.

So if by measuring sentience you mean to ask 'is X sentient?', meaning something like 'does X experience at least something, anything at all?', then one view is that it's a binary property: all or nothing, 0 or 1, the lights are either on or off, so to speak. I think this is why David scare-quoted "less sentience".

In this case, we might be able to speak of probabilities of X being sentient or not sentient. I find Brian Tomasik's idea of a 'sentience classifier' (replace the moral value output with P(X is sentient)) to be a useful way of thinking about this.

There is a view that there could be fractional experiences. I myself don't understand how that would be possible, but there is speculation about it.

However, if by sentience you instead mean the intensity of experiences, etc. (given that at least something is experienced), then the 2020 moral weight series is a good starting point. I personally disagree with some of its conclusions, but it's a really good framework for thinking about it.

My own view, which is similar if not identical to Tomasik's, is that when thinking about any of these, it's ultimately 'up to you to decide'. I would add that human intuitions didn't evolve to find the ground truth about what is sentient and what such experiences contain. Instead, they developed out of the need to analyze the behavior of other agents. Our intuitions could be exactly right, entirely wrong, or somewhere in between, but there's no way to really know, because we aren't other minds.
