splinter

I am using "conscious" and "sentient" as synonyms. Apologies if this is confusing.

I don't doubt at all that all animals are sentient in the sense that you mean. But I am referring to the question of whether they have subjective experience -- not just pleasure and pain signals but also a subjective experience of pleasure and pain.

This doesn't feel like a red herring to me. Suffering only takes on a moral valence if it describes a conscious experience.

Thanks for this response. It seems like we are coming at this topic from very different starting assumptions. If I'm understanding you correctly, you're saying that we have no idea whether LLMs are conscious, so it doesn't make sense to draw any inferences from them to other minds.

That's fair enough, but I'm starting from the premise that LLMs in their current form are almost certainly not conscious. Of course, I can't prove this. It's my belief based on my understanding of their architecture. I'm very much not saying they lack consciousness because they aren't instantiated in a biological brain. Rather, I don't think that GPUs performing parallel searches through a probabilistic word space by themselves are likely to support consciousness.

Stepping back a bit: I can't know if any animal other than myself is conscious, even fellow humans. I can only reason through induction that consciousness is a feature of my brain, so other animals that have brains similar in construction to mine may also have consciousness. And I can use the observed output of those brains -- behavior -- as an external proxy for internal function. This makes me highly confident that, for example, primates are conscious, with my uncertainty growing with greater evolutionary distance.

Now along come LLMs to throw a wrench in that inductive chain. LLMs are -- in my view -- zombies that can do things previously only humans were capable of. And the truth is, a mosquito's brain doesn't really have all that much in common with a human's. So now I'm even more uncertain -- is complex behavior really a sign of interiority? Does having a brain made of neurons really put lower animals on a continuum with humans? I'm not sure anymore.

Other animals do share many brain structures with us, but by the same token, most animals lack the brain structures that are most fundamental to what makes us human. As far as I am aware (and I will quickly get out of my depth here), only mammals have a neocortex, and small mammals don't have much of one.

Hopefully this is clear from my post, but ChatGPT hasn't made me rethink my beliefs about primates or even dogs. It definitely has made me more uncertain about invertebrates, reptiles, and fish. (I have no idea what to think about birds.)

Answer by splinter · Nov 30, 2022

I'm donating 10% of my pre-tax income this year, and most of it will be distributed to the usual suspects identified by GiveWell, Happier Lives Institute, and Animal Charity Evaluators. A small amount will be reserved for some local charities whose work I am familiar with. 

What I would love some advice on is ways to donate to Ukraine. There is probably no way to really know the effectiveness of any donations to Ukraine, but in general I think supporting the norm of respect for national sovereignty is actually quite important, apart from the (also quite important) humanitarian considerations. Does anyone have any thoughts?

The negative reactions to this post are disheartening. I have a certain affection for the parodic levels of overthinking that characterize the EA community, but here you can see the downsides of that overthinking in concrete form.

Of course it is meaningful that Eliezer Yudkowsky has made a bunch of terrible predictions in the past that closely echo predictions he continues to make in slightly different form today. Of course it is relevant that he has neither owned up to those earlier terrible predictions nor explained how he has learned from those mistakes. Of course we should be more skeptical of similar claims he makes in the future. Of course we should pay more attention to broader consensus or aggregate predictions in the field than to outlier predictions.

This is sensible advice in any complex domain, and saying that we should "evaluate every argument in isolation on its merits" is a type of special pleading or sophistry. Sometimes (often!) the obvious conclusions are the correct ones: even extraordinarily clever people are often wrong; extreme claims that other knowledgeable experts disagree with are often wrong; and people who make extreme claims that prove to be wrong should be strongly discounted when they make further extreme claims.

None of this is to suggest in any way that Yudkowsky should be ignored, or even that he is necessarily wrong. But if you yourself are not an expert in AI (as most of us aren't), his past bad predictions are highly relevant indicators when assessing his current predictions.

I'm not totally sure what #1 means. But it doesn't seem like an argument against privileging future ethics over today's ethics.

I view #2 as very much an argument in favor of privileging future ethics. We don't give moral weight to ghosts and ancestors anymore because we have improved our understanding of the world and no longer view these entities as having consciousness or agency. Insofar as we live in a world that requires tradeoffs, it would be actively immoral to give weight to a ghost's wellbeing when making a moral decision.

I think this is well-taken, but we should be cautious about the conclusions we draw from it. 

It helps to look at a historical analogy. Most people today (I think) consider the 1960s-era civil rights movement to be on the right side of history. We see the racial apartheid system of Jim Crow America as morally repugnant. We see segregated schools and restaurants and buses as morally repugnant. We see flagrant voter suppression as morally repugnant (google "white primaries" if you want to see what flagrant means). And so we see the people who were at the forefront of the civil rights movement as courageous and noble people who took great personal risks to advance a morally righteous cause. Because many of them were.

If you dig deeply into the history of the civil rights movement, though, you will also find a lot of normal human stuff. Infighting. Ideological excess. Extremism. Personal rivalry. Some civil rights organizations of the time were organizationally paralyzed by a very 1960s streak of countercultural anti-authoritarianism that has not aged well. They were often heavily inflected with a Marxist revolutionary politics that has not aged well either. Many in the movement regarded now-revered icons like MLK Jr. as overly cautious establishmentarian sellouts more concerned with their place in history than with social change.

My point is not that the civil rights movement was actually terrible. Nor is it that because the movement was right about school integration, it was also right about the virtues of Maoism. My point is that if you look closely enough, history is always a total goddamned mess. And yet, I still feel pretty comfortable saying that we have made progress on slavery.

So yes, I absolutely agree that many contemporary arguments about moral progress and politics will age terribly, and I doubt it will even take very long. Probably in ten years' time, many of the debates of today will look quaint and misguided. But this doesn't mean we should lapse into total relativism. It means we need to look at the right scale, and that we should increase our ethical and epistemic humility in direct proportion to the specificity of the moral question we are asking.

These are good questions, and I think the answer generally is yes, we should be disposed to treat the future's ethics as superior to our own, although we shouldn't be unquestioning about this.

The place to start is simply to note the obvious fact that moral standards do shift all the time, often in quite radical ways. So at the very least we ought to assume a stance of skepticism toward any particular moral posture, as we have reason to believe that ethics in general are highly contingent, culture-bound, etc.

Then the question becomes whether we have reasons to favor some period's moral stances over any others. There are a variety of reasons we might do so:

  1. Knowledge has been increasing monotonically, and in recent years extremely rapidly. Much of this knowledge is scientific, technological, or involves other kinds of expertise, and such knowledge does have a moral valence. E.g., we do not believe in witches anymore.
  2. Some of our increasing knowledge is historical and philosophical. The Catholic church did a lot of things in the middle ages that seem very bad to me but seemed morally justified to the church at the time. But I also have access to a lot of historical information about the middle ages, and I can situate the church's actions in a broader story about politics, empire, religious conflict, etc., that undercuts the church's moral claims. Other things being equal, we probably are wise to privilege later time periods over earlier ones, because later periods saw how things turned out. Nazism seemed like a moral imperative to Nazis, but here in 2022, I know how WWII played out. (Spoiler alert: not well!)
  3. The moral changes that have occurred over time are not random, and we can apply meta-ethics to them to try to understand how things have changed. We used to condone slavery and now we abhor it. Is that just happenstance, such that in some alternate history we used to abhor slavery (perhaps for religious reasons) and now embrace it (perhaps because of the logic of capitalism)? Probably not, because across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans. So we can ask whether we think that is a good ethical trend and draw conclusions about the relative merits of different moral frameworks.
  4. Wealth has also been increasing more or less monotonically, and insofar as moral behavior might be considered a luxury good, we should suppose that it may be more abundant these days than in the past. (This claim deserves a ton of scrutiny. I think it probably is true in some spheres -- e.g., gender equality -- and maybe less so in others.)

I want to stress that I don't think these arguments are absolute proof of anything; they are simply reasons we should be disposed to privilege the broad moral leanings of the future over those of the past. Certainly I think that over short time spans, many moral shifts are highly contingent and culture-bound. I also think that broad trends might mask a lot of smaller trends that could bounce around much more randomly. And it is absolutely possible that some long-term trends will be morally degrading. For example, I am not at all sure that long-term technological trends are well-aligned with human flourishing.

It is very easy to imagine that future generations will hold moral positions that we find repugnant. Imagine, for example, that in the far future pregnancy is obsolete. The vast majority of human babies are gestated artificially, which people of the future find safer and more convenient than biological pregnancy. Imagine as a consequence of this that viable fetuses become much more abundant, and people of the future think nothing of raising multiple babies until they are, say, three months old, selecting the "best" one based on its personality, sleeping habits, etc., and then painlessly euthanizing the others. Is this a plausible future scenario, or do meta-ethical trends suggest we shouldn't be concerned about it? If we look into our crystal ball and discover that this is in fact what our descendants get up to, should we conclude that in the future technological progress will degrade the value of human life in a way that is morally perverse? Or should we conclude instead that technological progress will undermine some of our present-day moral beliefs that aren't as well-grounded as we think they are? I don't have a definitive answer, but I would at least suggest that we should strongly consider the latter.

I'm thinking about 2021 giving now, and in addition to the usual suspects, I've been considering the following:

The Happier Lives Institute. I think the work they are doing is potentially high-leverage, and according to Michael Plant, as of one month ago they hadn't met their 2022 funding goal.

ProPublica. This one actually is funded by billionaires, so I don't think it counts as small or weird. Nevertheless, I have had a sense for a while that ProPublica punches above its weight in terms of bringing meaningful attention to otherwise under-valued topics. I was reminded of this recently when I read the book The Alignment Problem, which highlighted some of ProPublica's work in the area of AI risk.