
adamShimi

238 karma · Joined February 2020

Bio

Full-time independent deconfusion researcher (https://www.alignmentforum.org/posts/5Nz4PJgvLCpJd6YTA/looking-deeper-at-deconfusion) in AI Alignment. (I also have a PhD in the theory of distributed computing.)

If you're interested in some of the research ideas you see in my posts, know that I keep private docs with the most compressed version of my deconfusion ideas while they are in the process of getting feedback. I can give you access if you PM me!

A list of topics I'm currently doing deconfusion on:

  • Goal-directedness for discussing AI Risk
  • Myopic Decision Theories for dealing with deception (with Evan Hubinger)
  • Universality for many alignment ideas of Paul Christiano
  • Deconfusion itself to get better at it
  • Models of Language Models, to clarify the alignment issues surrounding them.

Comments (24)

Thanks for the thoughtful comment!

Behaviour science
I work in this space, and much of the theory seems very relevant to understanding non-human agents. For instance, I wonder if there would be value in exploring whether models of human behaviour such as COM-B and the FBM could be useful in modelling the actions of AI agents. For example, if it is useful to theorise that a human agent's behaviour only occurs when they have sufficient motivation, sufficient ability, and a trigger to act (as per the FBM), it might also be useful to do so for a non-human agent.

This sounds like a potentially good analogy, but one has to be careful that it doesn't rely on assumptions that only apply to humans, or to quite bounded agents.

I used to be interested in this (it is basically attitude and behaviour change). 

I wonder if the idea of persuasion and underlying theory is useful for understanding how AI agents should respond to information and choose which information to share with other agents to achieve goals (i.e., to persuade). If so, then communications/processing models such as McGuire, Shannon-Weaver, or Lasswell may be useful. 

Related to that, I wrote a (not very good) paper outlining the concept of persuasion a long time ago, which finished with:
"From a philosophical perspective, we recommend that future research should consider if non-human agents can not only persuade but can also be persuaded. Research already explores how emerging technologies, such as artificial intelligences, may be human-like to varying extents (see Bostrom, 2014; Kurzweil, 2005; Searle, 1980). If we can believe that non-biological beings might be conscious and human-like (Calverley, 2008; Hofstadter & Dennett, 1988) then maybe we should also consider whether these beings will have beliefs, attitudes and behaviours and thus be subject to persuasion?"

Persuasion (both by AIs and of AIs) is indeed an important topic in alignment. There's a general risk that optimization is very easily spent on manipulating humans, whether intentionally (training an AI which actually ends up wanting to do something else, and so has reason to manipulate us) or unintentionally (training an AI such that it's incentivized to answer what we would prefer to hear rather than giving the most accurate and appropriate answer).

For the persuasion of AIs by AIs, there are some initial thoughts around memetics for AIs, but they are not fully formed yet.

Systems thinking 
I am still a novice in this area and what I know is probably outdated. I wonder if there could be value in drawing on concepts from systems thinking when attempting to manage AI. As an example, this model suggests 12 leverage points for systems change (based on this work). Could we model/manage an agent's behavioural outcomes in the same way?

I don't know much about this literature, but it makes me think of more structural takes on the alignment problem, which emphasize the importance of the structure of society in funneling and pushing optimization, rather than the individual power of agents to alter it.

I am interested to know what you think, if you have time. Do any of these areas seem fruitful? Are they irrelevant, or are there better approaches already in use?

So, as you can see above, none of these ideas sounds bad or impossible to make work, but judging them correctly would require far more effort spent analyzing them. Maybe you should apply for the fellowship, especially for the behavioural work where you're more of an expert? ;)

I am very aware that I don't have a good understanding of how AI agents' behaviour is modelled within the AI safety/governance literature. I also don't understand exactly i) what differences there are between those approaches and the approaches used in behavioural science/social science, or ii) what justifications would be needed for using different approaches in each case.

Can you (or anyone else) recommend things that I should read/watch to improve my understanding?

It's a very good question, and shamefully I don't have any answer that's completely satisfying. But here is the next best thing: some resources that will give you a more rounded perspective on alignment:

  • Richard Ngo's AGI safety from first principles, a condensed starter that presents the main lines of argument in a modern (post-ML-revolution) way.
  • Rob Miles's YouTube channel on alignment, with great videos on many different topics.
  • Andrew Critch and David Krueger's ARCHES, a survey of alignment problems and perspectives that puts more emphasis than most on structural approaches.

Hmm, I think I expressed my point badly in the comment above. What I mean isn't that formal methods will never be useful, just that they're not really useful yet, and that more pure AI safety research is needed before they can be.

The general reason is that all formal methods try to show that a program satisfies a specification under a model of computation. Right now, a lot of the work on formal methods applied to AI focuses on adapting known formal methods to the specific programs (say, neural networks) and to the right model of computation (in what contexts these programs are used, and how their execution can be abstracted to make it simpler). But one point this work fails to address is the question of the specification.

Note that when I say specification, I mean a formal specification. In practice, it's usually a modal logic formula, in LTL for example. And here we get to the crux of my argument: nobody knows the specification for almost all of the AI properties we care about. Nobody knows the specification for "recognizing kittens" or "correctly answering a question in English". And even for safety questions, we don't yet have a specification of "doesn't manipulate us" or "is aligned". That's the work that still needs to be done, and that's what people like Paul Christiano and Evan Hubinger, among others, are doing. But until we have such properties, formal methods will not be very useful for either AI capabilities or AI safety.
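
To make the contrast concrete, here is the kind of property LTL can express for an ordinary program (a toy illustration of mine, not an example from the AI literature):

    G (request → F grant)            "every request is eventually followed by a grant"
    G ¬(critical_A ∧ critical_B)     "two processes are never in their critical sections at once"

We know how to state and model-check properties of this shape; nobody has written anything comparable for "recognizes kittens" or "doesn't manipulate us".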

Lastly, I want to point out that working on formal methods for AI is also a way to get money and prestige. I'm not going to go full Hanson and say that's the only reason, but it's still part of the international funding situation. I know of examples of people in France getting AI-related funding for a project that is really, truly useless for AI.

This post annoyed me. Which is a good thing! It means that you hit where it hurts, and you forced me to reconsider my arguments. I also had to update (a bit) toward your position, because I realized that my "counter-arguments" weren't that strong.

Still, here they are:

  • I agree with the remark that a lot of work will have both capability and safety consequences. But instead of seeing that as an argument to laud the safety aspect of capability-relevant work, I want to look at the differential technical progress. What makes me think that EA safety is more relevant to safety questions than mainstream AI is that for almost all EA safety work, the differential progress is in favor of safety, while for most research in mainstream/academic AI, the differential progress seems either neutral or in favor of capabilities. (I'd be very interested in counterexamples, on both sides.)
  • Echoing what Buck wrote, I think you might overestimate the value of research that has potential consequences for safety but is not about it. And thus I do think there's a significant gain in value from focusing on safety problems specifically.
  • About formal methods: they aren't even useful for AI capabilities, let alone AI safety. I want to write a post about that at some point, but when you're unable to specify what you want, formal methods cannot save your ass.

With all that being said, I'm glad you wrote this post and I think I'll revisit it and think more about it.

Since many other answers address the more general ideas, I want to focus on the "voluntary" sadness of reading/watching/listening to sad stories. I was curious about this myself, because I noticed that reading only "positive" and "joyous" stories eventually feels empty.

The answer seems to be that sad elements in a story bring more depth than fun/joyous ones. In that sense, sadness in stories acts as a signal of depth, but also as a way to access some deeper part of our emotions and internal life.

I'm reminded of Mark Manson's quote from this article:

If I ask you, “What do you want out of life?” and you say something like, “I want to be happy and have a great family and a job I like,” it’s so ubiquitous that it doesn’t even mean anything.
A more interesting question, a question that perhaps you’ve never considered before, is what pain do you want in your life? What are you willing to struggle for? Because that seems to be a greater determinant of how our lives turn out.

Maybe sadness and pain just tell us more about others and ourselves, and that's what we find so enthralling.

Thanks for that very in-depth answer!

I was indeed thinking about 3., even if 1. and 2. are also important. And I get that the main value of these diagrams is to force one to state one's views explicitly and as formally as possible.

I guess my question was more: given two different causal diagrams for the same risk (made by different researchers, for example), do you have an idea of how to compare them? Like finding the first difference along the causal path, or some other means of comparison. This seems important because even with clean descriptions of our views, we can still talk past each other if we cannot see where the difference truly lies.
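
To make concrete what I mean by "finding the first difference along the causal path", here's a toy sketch (my own framing, with made-up node names, assuming each diagram is given as a node → parents map):

    # Toy sketch: compare two causal diagrams of the same risk by walking a shared
    # causal order and reporting the first node whose parents (direct causes) differ.
    from typing import Dict, List, Optional

    CausalDiagram = Dict[str, List[str]]  # node -> list of its direct causes

    def first_divergence(a: CausalDiagram, b: CausalDiagram,
                         causal_order: List[str]) -> Optional[str]:
        """Return the first node (in causal order) whose direct causes differ."""
        for node in causal_order:
            if set(a.get(node, [])) != set(b.get(node, [])):
                return node
        return None

    # Two hypothetical researchers' diagrams of the same risk:
    researcher_1 = {"misaligned goals": [], "deception": ["misaligned goals"],
                    "catastrophe": ["deception"]}
    researcher_2 = {"misaligned goals": [], "deception": [],
                    "catastrophe": ["deception", "misaligned goals"]}

    order = ["misaligned goals", "deception", "catastrophe"]
    print(first_divergence(researcher_1, researcher_2, order))
    # -> "deception": the diagrams first disagree on what causes deception.

This is of course much cruder than anything you'd want in practice, but it's the kind of comparison I have in mind.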

Great post! I feel these diagrams will be really useful for clarifying the possible interventions and parts of the existential risks.

Do you think they'll also serve to compare different positions on a specific existential risk, like the trajectories in this post? Or do you envision the diagram for a specific risk as a summary of all causal pathways to that risk?

What about diseases? I admit I know little about this period of history, but the accounts I have read (for example in Guns, Germs, and Steel) attribute much of the advantage to the spread of European diseases to the Americas.

Basically, because the Americas lacked many big domesticated mammals, they could not have cities like European ones, with cattle everywhere. The living conditions in those big European cities drove the emergence and spread of diseases. And when the conquistadors went to the Americas, they brought these diseases with them to a population which had never been exposed to them, causing most of the deaths of the early conquests.

(This is the picture from the few sources I've read, so it might be wrong or inaccurate; but if it is, I am very curious why.)

Also interested. I had not thought about it before, but since the old generation dying off is one of the ways scientific and intellectual changes become fully accepted, that would probably have a big impact on our intellectual landscape and culture.

I'm curious about the article, but the link points to nothing. ^^
