Pivocajs

64 · Joined Dec 2017

Bio

Vojta Kovarik. AI alignment and game theory researcher.

Comments (16)

Nitpicky feedback on the presentation:

If I am understanding it correctly, the current format of the tables makes them fundamentally incapable of expressing evidence for insects being unable to feel pain. (The colour coding goes from green = evidence for to red = no evidence; which colour would express evidence against?) I would be more comfortable with a format without this issue, particularly since it seems justified to expect the authors to be biased towards wanting to find evidence for. [Just to be clear, I am not pushing against the results, or against caring about insects. Just against the particular presentation :-).]

After thinking about it more, I would interpret (parts of) the post as follows:

  • To the extent that we found research on these orders O and criteria C, each of the orders satisfies each of the criteria.
  • We are not saying anything about the degree to which a particular O satisfies a particular C. [Uhm, I am not sure why. Are the criteria extremely binary, even if you measure them statistically? Or were you looking at the degrees, and every O satisfied every C to a high enough degree that you just decided not to talk about it in the post?]
  • To recap: you don't talk about the degrees-of-satisfying-criteria, and any research that existed pointed towards sufficient-degree-of-C, for any O and C. Given this, the tables in this post essentially just depict "How much quality-adjusted research we found on this."
  • In particular, the tables do not depict anything like "Do we think these insects can feel pain, according to this measure?". Actually, you believe that probably once there is enough high-quality research, the research will conclude that all insects will satisfy all of the criteria. (Or all orders of insects sufficiently similar to the ones you studied.)
    [Here, I mean "believe" in the Bayesian sense where if you had to bet, this is what you would bet on. Not in the sense of you being confident that all the research will come up this way. In particular, no offense meant by this :-) .]

Is this interpretation correct? If so, then I register the complaint that the post is a bit confusing --- not particularly sure why, just noticing that it made me confused. Perhaps it's the thing where I first understood the tables/conclusions as "how much pain do these types of insects feel?". (And I expect others might get similarly confused.)

I saw the line "found no good evidence that anything failed any criterion", but just to check explicitly: What do the confidence levels mean? In particular, should I read "low confidence" as "weak evidence that X feels pain-as-operationalized-by-Criterion Y"? Or as "strong evidence that X does not feel pain-as-operationalized-by-Criterion Y"?

In other words:

  • Suppose you did the same evaluation for the order Rock-optera (uhm, I mean literal rocks). (And suppose there was literature on that :-).) What would the corresponding row look like? All white, or would you need to add a new colour for that?
  • Suppose you found 1000 high-quality papers on order X and Criterion Y, and all of them suggested that X is precisely borderline between satisfying Y vs not satisfying it. How would this show up in the tables?

"National greatness, and staying ahead of China for world influence requires that we have the biggest economy. To do that, we need more people." -Matt Yglesias, One Billion Americans.

Yeah, the guy who has chosen to have one child is going to inspire me to make the sacrifices involved in having four. It might be good for America, but the ‘ask’ here looks like it is that I sacrifice my utility for Matt’s one kid, and thus is not cooperate-cooperate. I’ll jump when you jump.

Two pushbacks here:

(1) The counterargument seems rather weak here, right? Even if Matt Yglesias had no kids, that doesn't mean his argument isn't valid. EG, if a single non-vegan person claims that more people should be vegan, will you view that as evidence that people should not be vegan? ;-) (Not that I disagree with your claim. Just with your argument.)

(2) Did you actually read One Billion Americans, or are you just taking the citation and interpreting it as Matt Yglesias making an argument for people having more children? I didn't read the book, so I am not sure. But I listened to a podcast with Matt Yglesias about the book, and my impression was that he was primarily arguing for changing immigration policy, and (if memory serves) not really making any strong claims about how many kids people should have.

Yes, sure, probabilities are only in the map. But I don't think that matters for this. Or I just don't see what argument you are making here. (CLT is in the map, expectations are taken in the map, and decisions are made in the map (then somehow translated into the territory via actions). I don't see how that says anything about what EV reasoning relies on.)

Agree with Acylhalide's point - you only need to be non-Dutchbookable by bets that you could actually be exposed to.
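(In case a concrete illustration helps, here is a minimal Dutch-book example with made-up numbers. Suppose your credences are incoherent:

$$P(A) = 0.6, \qquad P(\lnot A) = 0.6 .$$

You would then pay 0.6 for a ticket that pays 1 if A, and 0.6 for a ticket that pays 1 if not-A. Exactly one ticket pays out, so you pay 1.2 to receive 1, a sure loss of 0.2. The point above is that this only bites if someone can actually offer you both tickets.)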

To address a potential misunderstanding: I agree with both Sharmake's examples. But they don't imply you have to maximise expected utility always. Just when the assumptions apply.

More generally: expected utility maximisation is an instrumental principle. But it is justified by some assumptions, which don't always hold.

Yes, the expected utility is larger. The claim is that there is nothing incoherent about not maximising expected utility in this case.

To try rephrasing:

  • Principle 1: if you have to choose between an X% chance of getting some outcome A and a >=X% chance of a strictly better outcome B, you should take the latter.
  • Principle 2: if you will be facing a long series of comparably significant choices, you should decide each of them by expected utility maximisation.
  • Principle 3: you should do expected utility maximisation for every single choice, even if that is the last/most important choice you will ever make.

The claim is: P1 is solid. P2 follows from P1 (via the Central Limit Theorem, or whatever the right math is). But P3 does not follow from P1/P2, and there will be cases where it is justified not to obey P3. (Like the case with a 51% chance of doubling the world's goodness and a 49% chance of destroying it.)
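To make the P2-vs-P3 gap concrete, here is a rough simulation sketch. Only the 51%/49% numbers come from the example above; the function names and the per-round payoffs in the first function are my own illustrative choices.

```python
import random

def many_small_bets(n_bets=10_000, seed=0):
    """Additive case: each round, choose between a sure +1 or a gamble
    paying +2 with probability 0.51 (else +0).
    Per-round expected values: 1.0 vs 1.02, so EV-maximisation says gamble."""
    rng = random.Random(seed)
    sure_total = n_bets * 1.0
    gamble_total = sum(2.0 if rng.random() < 0.51 else 0.0 for _ in range(n_bets))
    return sure_total, gamble_total

def one_shot_world_bet(n_runs=10_000, seed=0):
    """One-shot case: 51% chance of doubling the world's goodness,
    49% chance of destroying it. EV = 1.02 worlds > 1 world,
    yet roughly 49% of hypothetical runs end with nothing."""
    rng = random.Random(seed)
    outcomes = [2.0 if rng.random() < 0.51 else 0.0 for _ in range(n_runs)]
    destroyed = sum(o == 0.0 for o in outcomes) / n_runs
    return sum(outcomes) / n_runs, destroyed

if __name__ == "__main__":
    sure, gamble = many_small_bets()
    print(f"Repeated additive bets: sure strategy {sure:.0f}, gamble strategy {gamble:.0f}")
    avg, destroyed = one_shot_world_bet()
    print(f"One-shot bet: mean outcome {avg:.2f} worlds, destroyed in {destroyed:.0%} of runs")
```

The first function is (roughly) the situation P2 relies on: many comparable, additive choices, where the per-choice EV-maximiser almost surely comes out ahead. The second is the one-shot case: the average across hypothetical runs is above 1, yet nearly half of the runs end with nothing, which is exactly where P3 has to stand on its own.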

Note that I am not claiming it's wrong to do expected utility maximisation in all scenarios. Just saying that both doing it and not doing it are OK. And therefore it is (very?) non-strategic to associate your philosophical movement with it. (Given that most people's intuitions seem to be against it.)

Does this explanation make sense? Maybe I should change the title to something with expected utility?

This gave me an idea for an experiment/argument. Posting here, in case somebody wants to come up with a more thought-out version of it and do it.

[On describing what would change his mind:] You couldn’t find weird behaviors [in the AI], no matter how hard you tried.

People like to take an AI, poke it, and then argue "it is making [all these silly mistakes], therefore [not AGI/not something to worry about/...]". Now, the conclusion might be right, but the argument is wrong --- even dangerous things can be stupid in some settings. Nevertheless, the argument seems convincing.

My prediction is that people make a lot of mistakes[1] that would seem equally laughable if it were an AI that made them. Except that we are so used to them that we don't appreciate it. So if one buys the argument above, they should conclude that humans are also [not AGI/not something to worry about/...]. So perhaps if we presented the human mistakes right, it could become a memorable counterargument to "AI makes silly mistakes, hence no need to worry about it".

Some example formats:

  • "Look at these silly AI mistakes! Surprise, that's normal people." or
  • "Quizz: AI mistake or human mistake?"

(uhm, or "Quiz: AI or Trump?"; I wouldn't mention this, except that bots imitating that guy already exist).

Obligatory disclaimer: It might turn out that humans really don't make [any sorts of] [silly mistakes current AI makes], or make [so few that it doesn't matter]. If you could operationalize this, that would also be valuable.

  1. ^

    What are "these mistakes"? I don't know. Exercise for the reader.

Some thoughts:
 1) Most importantly: In your planning, I would explicitly include the variable of how happy you are. In particular, if the AI Safety option would result in a break-up of a long-term & happy relationship, or cause you to be otherwise miserable, it is totally legitimate to not do the AI Safety option. Even if it was higher "direct" impact. (If you need an impact-motivated excuse - which might even be true - then think about the indirect impact of avoiding signalling "we only want people who are so hardcore that they will be miserable just to do this job".)

2) My guess: Given that you think your QC work is unlikely to be relevant to AI Safety, I personally believe that (ignoring the effect on you) the AI Safety job is higher impact.

3) Why is it hard to hire world experts to work on this? (Some thoughts, possibly overlapping with what other people wrote.)

  • "world experts in AI/ML" are - kinda tautologically - experts in AI/ML, not in AI Safety. (EG, "even" you and me have more "AI Safety" expertise than most AI/ML experts.)
  • Most problems around AI Safety seem vague, and thus hard to delegate to people who don't have their own models of the topic. Such models take time to develop. So these people might not be productive for a year (or two? or more? I am not sure) even if they are genuine about AI Safety work.
  • Top people might be more motivated by prestige than money. (And being "bought off" seems bad from this point of view, I guess.)
  • Top people might be more motivated by personal beliefs than money. (So the bottleneck is convincing them, not money.)

4) I am tempted to say that all the people who could be effectively bought with money are already being bought with money, so you donating doesn't help here. But I think a more careful phrasing is "recruiting existing experts is bottlenecked on other things than money (including people coming up with good recruiting strategies)".

5) Phrased differently: In our quest for developing the AI Safety field, there is basically no tradeoff between "hiring 'more junior' people (like you)" and "recruiting senior people", even if those more junior people would go earning to give otherwise.

Two considerations seem very relevant here:
(1) Is your primary goal to help Ukrainians, or to make this more costly for Russia?
(2) Do you think the extra money is likely to change the outcome of the war, or merely the duration?
