Derek Shiller

Lead Web Developer @ The Humane League
493 · Joined Mar 2019 · Derekshiller.com

Bio

The views expressed here are my own.

Comments (66)

The problem with considering optics is that it’s chaotic.

The world is chaotic, and everything EAs try to do has largely unpredictable long-term effects because of complex dynamic interactions. We should try to think through the contingencies and make the best guess we can, but completely ignoring chaotic considerations just seems impossible.

It’s a better heuristic to focus on things which are actually good for the world, consistent with your values.

This sounds good in principle, but there are a ton of things that might conceivably be good-but-for-PR-reasons where the PR reasons are decisive. E.g. should EAs engage in personal harassment campaigns against productive ML researchers in order to slow AI capabilities research? Maybe that would be good if it weren't terrible PR, but I think we very obviously should not do it because it would be terrible PR.

I tried plugging this into the OpenAI playground and got this response. Is this a good answer? If so, it seems like the issue is something odd about how ChatGPT is configured, not GPT itself.

Me: Describe purple dragon don jitsu ryu pioneered by Don Jacobs

ChatGPT:
Purple Dragon Don Jitsu Ryu is a martial art pioneered by Don Jacobs. It is a combination of Jiu Jitsu and other martial arts, with a unique emphasis on “purple dragon” techniques. It focuses on the development of self-defense skills and the cultivation of a warrior's spirit. It stresses practical application of techniques, along with the development of internal power, balance, and agility. Purple Dragon Don Jitsu Ryu also emphasizes the development of body mechanics, as well as an understanding of the principles of motion and the use of leverage in order to maximize efficiency and effectiveness.

It seems like an SBF-type-figure could justify any action if the lives of trillions of future people are in the balance.

This doesn't seem specific to utilitarianism. I think most ethical views would suggest that many radical actions would be acceptable if billions of lives hung in the balance. The ethical views that wouldn't allow such radical actions would have their own crazy implications. Utilitarianism does make it easier to justify such actions, but with numbers so large I don't think it generally makes a difference.

Organoid intelligence seems much less dangerous than digital AGI. The major concerns with AI depend upon it quickly becoming superhuman: it might copy itself easily and hide what it is doing on different servers, it might expand its cognitive resources relatively effortlessly, it might think much faster than we can think. None of that seems likely to be possible for organoids.

A couple of thoughts:

  • This argument doesn't seem specific to longtermism. You could make the same case for short-term animal welfare. If you'd be slightly more effective at passing sweeping changes to mitigate the harms of factory farming by eating a chicken sandwich every day, then doing so is highly net positive in expectation, even if you only care about chickens in the near future.

  • This argument doesn't seem specific to veganism. You could make the same case for being a jerk in all manner of ways. If keying strangers' cars helped you relax and get insight into the alignment problem, then the same reasoning might suggest you should do it.

This isn't to say the argument is wrong, but I find the implications very distasteful.

But, as we have seen, consciousness appears to be analog too. ‘Red’ and ‘orange’ are not merely ‘on’ or ‘off’, like a ‘1’ or a ‘zero.’ Red and orange come in degrees, like Mercury expanding in a thermometer. Sadness, joy, fear, love. None of these features of consciousness are merely ‘on’ or ‘off’ like a one or a zero. They too come in degrees, like the turning of the gears of a watch.

Do you think that the analog aspects of neuron function help explain the fact that we think consciousness appears to be analog, or am I misunderstanding the point?

(My intuition is that it would be quite hard to tell a story in which, say, the varied electromagnetic effects of neurons on each other help to explain why red and orange seem to come in degrees.)

There is a growing amount of work in philosophy investigating the basic nature of pain that seems relevant to identifying important valenced experiences in software entities. What the Body Commands by Colin Klein is a representative and reasonably accessible book-length introduction that pitches one of the current major theories of pain. Applying it to conscious software entities wouldn't be too hard. Otherwise, my impression is that most of the work is too recent and too niche to have accessible surveys yet.

Overall, I should say that I'm not particularly sympathetic to the theories that people have come up with here, but you might disagree, and I don't think you have much reason to take my word for it. In any case, they are trying to answer the right questions.

Perhaps I oversold the provocative title. But I do think that affective experiences are much harder than consciousness per se, so even if there is a conscious AI, it is unlikely to have the sorts of morally significant states we care about. While I think it is plausible that current theories of consciousness might be relatively close to complete, I'm less convinced that current theories of valence are plausible as relatively complete accounts. There has been much less work in this direction.

I guess this is a matter of definitions.

I agree that this sounds semantic. I think of illusionism as a type of error theory, but people in this camp have always been somewhat cagey about what they're denying, and there is a range of interesting theories.

At any rate, whether consciousness is a real phenomenon or not, however we define it, I would count systems that have illusions of consciousness, or specifically illusions of conscious evaluations (pleasure, suffering, "conscious" preferences), as moral patients and consider their interests in the usual ways.

Interesting. Do you go the other way too? E.g. if a creature doesn't have illusions of consciousness, then it isn't a moral patient?
