David Mathers


Ok, I slightly overstated the point. This time, the supers selected were not a (mostly) random draw from the set of supers. But they were in the original X-risk tournament, and in that case too, they were not persuaded to change their credences via further interaction with the concerned (that is, the X-risk experts). Then, when we took the more skeptical of them and gave them yet more exposure to AI safety arguments, that still failed to move the skeptics. I think that, taken together, these two results show that AI safety arguments are not all that persuasive to the average super. (More precisely, that no amount of exposure to them will persuade the supers as a group to the point where they get a median significantly above 0.75% in X-risk by the century's end.)

Ok yes, in this case they were. 

But this is a follow-up to the original X-risk tournament, where the selection really was fairly random (obviously not perfectly so, but it's not clear in what direction selection effects in which supers participated biased things). And in the original tournament, the supers were also (mostly) fairly unpersuaded by the case for AI X-risk. Or rather, to avoid putting it in too binary a way, they did not move their credences further on hearing more argument after the initial round of forecasting. (I do think the supers' level of concern was enough to motivate worrying about AI given how bad extinction is, so "unpersuaded" is a little misleading.) At that point, people said 'they didn't spend enough time on it, and they didn't get the right experts'. Now, we have tried further with different experts, more time and effort, lots of back and forth, etc., and those who participated in the second round are still not moved. It is possible that the only reason the participants were not moved the second time round is that they were more skeptical than some other supers the first time round. (Though the difference between medians of 0.1% and 0.3% in X-risk by 2100 is not that great.) But I think if you get 'in imperfect conditions, a random smart crowd were not moved at all; then we tried the more skeptical ones in much better conditions and they still weren't moved at all', the most likely conclusion is that even people from the less skeptical half of the distribution in the first go-round would not have moved their credences either, had they participated in the second round. Of course, the evidence would be even stronger if the people had been randomly selected the first time as well as the second.

TL;DR Lots of things are believed by some smart, informed, mostly well-calibrated people. It's when your arguments are persuasive to (roughly) randomly selected smart, informed, well-calibrated people that we should start being really confident in them. (As a rough heuristic, not an exceptionless rule.)

I agree this is quite different from the standard GJ forecasting problem. And that GJ forecasters* are primarily selected for and experienced with forecasting quite different sorts of questions. 

But my claim is not "trust them, they are well-calibrated on this". It's more "if your reason for thinking X will happen is a complex multi-stage argument, and a bunch of smart people with no particular reason to be biased, who are also selected for being careful and rational on at least some complicated, emotive questions, spend hours and hours on your argument and come away with a very different opinion of its strength, you probably shouldn't trust the argument much (though this is less clear if the argument depends on technical scientific or mathematical knowledge they lack**)". That is, I am not saying "supers are well-calibrated, so the risk probably is about 1 in 1000". I agree the case for that is not all that strong. I am saying "if the concerned group's credences are based on a multi-step, non-formal argument whose persuasiveness the supers feel very differently about, that is a bad sign for how well-justified those credences are."

Actually, in some ways, the case for AI X-risk work being a good use of money might look better if the supers were obviously well-calibrated on this. A 1 in 1000 chance of an outcome as bad as extinction is likely worth spending some small portion of world GDP on preventing. And AI safety spending so far is a drop in the bucket compared to world GDP. (Yeah, I know technically the D stands for domestic, so "world GDP" can't be quite the right term, but I forget the right one!) Indeed, "AI risk is at least 1 in 1000" is how Greaves and MacAskill justify the claim that we can make a big difference to the long-term future in expectation in 'The Case for Strong Longtermism'. (If a 1 in 1000 estimate is relatively robust, I think it is a big mistake to call this "Pascal's Mugging".)
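To make the expected-value arithmetic behind this explicit, here is a rough back-of-envelope sketch. The ~$100 trillion figure for annual world output, and the multiple of it placed on avoiding extinction, are my own illustrative placeholders, not numbers from the tournament or from the comment above:

$$
\underbrace{10^{-3}}_{\text{assumed risk}} \times \underbrace{1000 \times W}_{\text{assumed value of avoiding extinction}} = W \approx \$100\text{ trillion (one year of world output)}
$$

So even valuing extinction-avoidance at only a thousand years' worth of current output, the expected loss comes to a full year of world output, next to which spending "some small portion" of world GDP on prevention looks cheap.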

*(Of whom I'm one, as it happens, though I didn't work on this; I did work on the original X-risk forecasting tournament.)

**I am open to argument that this actually is the case here. 

I think that is probably the explanation, yes. But I don't think it gets rid of the problem for the concerned camp that, usually, long complex arguments about how the future will go are wrong. This is not a sporting contest, where the concerned camp are doing well if they take a position that's harder to argue for and make a good go of it. It's closer to the mark to say that if you want to track truth you should (usually, mostly) avoid the positions that are hard to argue for.

I'm not saying no one should ever be moved by a big long complicated argument*. But I think that if your argument fails to move a bunch of smart people, selected for a good predictive track record, to anything like your view of the matter, that is an extremely strong signal that your complicated argument is nowhere near good enough to escape the general sensible prior that long complicated arguments about how the future will go are wrong. This is particularly the case when your assessment of the argument might be biased, which I think is true for AI safety people: if they are right, then they are some of the most important people, maybe even THE most important people, in history, not to mention the quasi-religious sense of meaning people always draw from apocalyptic salvation-versus-damnation stories. Meanwhile, the GJ superforecasters don't really have much to lose if they decide "oh, I am wrong; looking at the arguments, the risk is more like 2-3% than 1 in 1000". (I am not claiming that there is zero reason for the supers to be biased against the hypothesis, just that the situation is not very symmetric.) I think I would feel quite differently about what this exercise (probably) shows if the supers had all gone up to 1-2%, even though that is a lot lower than the concerned group.

I do wonder (though I think other factors are more important in explaining the opinions of the concerned group) whether familiarity with academic philosophy helps people be less persuaded by long complicated arguments. Philosophy is absolutely full of arguments that have plausible premises and are very convincing to their proponents, but which nonetheless fail to produce convergence amongst the community. After seeing a lot of that, I got used to not putting that much faith in argument. (Though plenty of philosophers remain dogmatic, and there are controversial philosophical views I hold with a reasonable amount of confidence.) I wonder if LessWrong functions a bit like a version of academic philosophy in which there is, as in philosophy, a strong culture of taking arguments seriously and trying to have them shape your views, but where consensus actually is reached on some big-picture stuff. That might make people who were shaped intellectually by LW rather more optimistic about the power of argument (even as many of them would insist LW is not "philosophy"). But it could just be an effect of homogeneity of personalities among LW users, rather than a sign that LW was converging on truth.

*(Although personally, I am much more moved by "hmmm, creating a new class of agents more powerful than us could end with them on top; probably very bad from our perspective" than I am by anything more complicated. This is, I think, a kind of base-rate argument, resting on things like the history of colonialism and empire; but of course the analogy is quite weak, given that we get to create the new agents ourselves.)

'The concerned group also was more willing to place weight on theoretical arguments with multiple steps of logic, while the skeptics tended to doubt the usefulness of such arguments for forecasting the future.'

Seems to me like it's wrong to call this a general "difference in worldview" until we know whether "the concerned group" (i.e. the people who think X-risk from AI is high) think this is the right approach to all/most/many questions, or just apply it to AI X-risk in particular. If the latter, there's a risk it's just special pleading for an idea they are attached to, whereas if the former is true, they might (or might not) be wrong, but it's not necessarily bias.

Extremely minor and pedantic correction: Ōe Kenzaburō is male, not female: https://en.wikipedia.org/wiki/Kenzabur%C5%8D_%C5%8Ce  (I don't think that makes any significant difference to the point you're making, I just hate letting mistakes rest uncorrected!) 

'Rosenberg and the Churchlands are anti-realists about intentionality— they deny that our mental states can truly be “about” anything in the world.'

Taken literally, this is insane. It means no one has ever thought about going out to the shops for some milk. If it's extended to language (and why wouldn't it be?), it means that we can't say that science sometimes succeeds in representing the world reasonably well, since nothing represents anything. It is also very different from the view that mental states are real, but are behavioral dispositions rather than inner representations in the brain, since the latter view is perfectly compatible with known facts like "people sometimes want a beer".

I'm also suspicious of what the word "truly" is doing in this sentence, if it's not redundant. What exactly is the difference between "our mental states can be about things in the world" and "our mental states can truly be about things in the world"?

Only glanced at one or two sections, but the "goal realism is anti-Darwinian" section seems possibly irrelevant to the argument to me. When you first introduce "goal realism", it seems to be the view that goals are actual internal things somehow "written down" in the brain/neural net/other physical mind, so that you could modify the bit of the system where the goal is written down and get different behaviour, rather than there being nothing at all that is the representation of the AI's goals, because "goals" are just behavioral dispositions. But the view you're criticizing in the "goal realism is anti-Darwinian" section is the view that there is always a precise fact of the matter about what exactly is being represented at a particular point in time, rather than several different equally good candidates for what is represented. But I can think of representations as physically real vehicles (say, that some combination of neuron firings is the representation of flies/black dots that causes frogs to snap at them) without thinking it is completely determinate what (flies or black dots) is represented by those neuron firings. Determinacy of what a representation represents is not guaranteed just by the fact that a representation exists.

EDIT: Also, is Olah-style interpretability work presuming "representation realism"? Does it provide evidence for it? Evidence for realism about goals specifically? If not, why not?
