Alex P

12 karma · Joined Aug 2022

Posts (2)


Comments (14)

>By "satisfaction" I meant high performance on its mesa-objective

Yeah, I'd agree with this definition.

I don't necessarily agree with your two points of skepticism: for the first, I've already mentioned my reasons; for the second, it's true in principle, but it seems almost anything an AI would learn semi-accidentally is going to be much simpler and more internally consistent than human values. Low confidence on both, though, and in any case that's somewhat beside the point; I was mostly trying to understand your perspective on what utility is.

I am familiar with the basics of ML and the concept of mesa-optimizers. "Building copies of itself" (i.e. "multiply") is an optimization goal you'd have to specifically train into the system; I don't dispute that. I just think it's a simple and "natural" goal (in the sense that it aligns reasonably well with instrumental convergence), which you could train in robustly with comparative ease.

"Satisfaction" however, is not a term that I've met in ML or mesa-optimizers context, and I think the confusion comes from us mapping this term differently onto these domains. In my view, "satisfaction" roughly corresponds to "loss function minimization" in the ML terminology - the lower an AIs loss function, the higher satisfaction it "experiences" (literally or metaphorically, depending on  the kind of AI). Since any AI [built under the modern paradigm] is already working to minimize its own loss function, whatever that happened to be, we wouldn't need to care much about the exact shape of the loss function it learns, except that it should robustly include "building copy of itself". And since we're presumably talking about a super-human AIs here, they would be very good at minimizing that loss function. So e.g. they can have some stupid goal like "maximize paperclips & build copies of self", they'll convert the universe to some mix of paperclips and AIs and experience extremely high satisfaction about it.

But you seem to mean something very different by "satisfaction"? Would you mind stating explicitly what it is?

My point is that getting the "multiply" part right is sufficient; the AI will take care of the "satisfaction" part on its own, especially given that it's able to reprogram itself.

This assumes "[perceived] goal achievement" == "satisfaction" (aka utility), which was my assumption all along, but which apparently holds only under preference utilitarianism.

Ok, so here's my takeaway from the answers so far:

Most flavors of utilitarianism (except preference utilitarianism) don't count an arbitrary goal-having agent achieving its goals as utility. Instead, there is assumed to be some metric of similarity between the agent's goals and/or mental states and those of humans, and the agent's achievement of its goals counts less toward total utility the lower this similarity metric is, so completely alien agents achieving their alien goals and [non-]experiencing alien non-joy about it don't register as adding utility.

How exactly this metric should be formulated is disputed and fuzzy, and quite often a lot of this fuzziness and uncertainty is swept under the rug with the word "sentience" (or something similar) written on it.

Additionally, the proportion of EAs who would seriously consider "all humans replaced by [a particular kind of] AIs" an acceptable outcome may not be as trivial as I assumed.

Please let me know if I'm grossly misunderstanding or misrepresenting something, and thank you everyone for your explanations!

>It's hard to imagine AI systems having this

Why? Per instrumental convergence, any advanced AI is likely to have a self-preservation drive, and the negative reward signal it would receive upon a violation of that drive would be functionally very similar to pain (give or take the bodily component, but I don't think that's required? Otherwise simulating a million human minds in agony would be OK, and I assume we agree it's not). Likewise, any system with goal-directed agentic behavior would experience some reward from moving towards its goals, which seems functionally very similar to pleasure (or satisfaction, or something along those lines).

Can you, um, coherently imagine an agent that does not try to achieve its own goals (assuming it has no conflicting goals)?

That's true, but I think robustly embedding a goal of "multiply" is much easier than actual alignment. You can express it mathematically (see the sketch below), you can use evolution, etc.
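To sketch what "express it mathematically" could look like (an illustrative formalization I'm making up here, not a worked-out proposal): the reward at each step could simply count functioning copies,

$$R_t = N_t, \qquad J = \sum_{t=0}^{\infty} \gamma^t N_t, \quad 0 < \gamma < 1,$$

where $N_t$ is the number of functioning copies of the agent at time $t$, $\gamma$ is a discount factor, and $J$ is the discounted objective the agent maximizes. Whether this could actually be trained in robustly is of course the hard part.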

 

[To reiterate, I'm not advocating for any of this; I think any moral system that labels "humans replaced by AIs" as an acceptable outcome is a broken one.]

So, two questions (please also see my reply to HjalmarWijk for context):

  1. Do you, on these grounds, think that insect suffering (and everything more exotic) is meaningless? Our last common ancestor with insects hardly had any neurons, and unsurprisingly our neuronal architectures are very different, so there isn't much reason to expect any isomorphism between our "mental" processes.
  2. Assuming an AI is sentient (in whatever sense you put into that word) but otherwise not meaningfully isomorphic to humans, how do you define a "positive" inner life in that case?

Ok, so the crux of my question was my not realizing that non-preference utilitarianism exists, although now I'm even more confused, as I explained in my reply to HjalmarWijk. You also seem to be assuming that suffering (and, I assume, pleasure) exists separately from an agent achieving its goals, so I'm curious to hear how you define them.

 

>So for me there isn't really a paradox to resolve when it comes to propositions like 'the best future is one where an enormous number of highly efficient AGIs are experiencing as much joy as cybernetically possible, meat is inefficient at generating utility'.
 

Does this mean that you would agree with such a proposition?

Eliezer seems to come from the position that utility is more or less equal to "achieving this agent's goals, whatever those are", and as such even agents extremely different from humans can have it (the example of a trillion-times-more-powerful AI). This is very different from [my understanding of] what HjalmarWijk says above, where utility seems to be defined in a more-or-less universal way and a specific agent can have goals orthogonal or even opposite to utility, so you could have a trillion agents fully achieving their goals and yet not a single "utilon".

 

Re other ethical systems: I'm mostly asking about utilitarianism because it's what nearly everyone working on alignment subscribes to, and also because I know even less about the other systems. But at first glance, it seems like deontological or virtue ethics could have their own ways out of this problem? And for relativism or egoism it's a non-issue.
