Dr. David Mathers

1121 · Joined Dec 2021

Comments (100)

This goes considerably beyond 'international treaties with teeth are plausibly necessary here': 

'If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.'

Eliezer is proposing attacks on any country that builds AI above a certain level, whether or not it signs up to the treaty. That is not a treaty enforcement mechanism. I also think "with teeth" obscures by abstraction here, since it doesn't necessarily sound like it means war/violence, but that is what's being proposed.

People who know that they are outliers amongst experts in how likely they think X is (as I think being 99% sure of doom is, particularly combined with short-ish timelines) should be cautious about taking extreme actions on the basis of an outlying view, even if they think they have personally adjusted their confidence downward to account for the fact that other experts disagree, and still ended up north of 99%. Otherwise you get the problem that extreme actions are taken even when most experts think they will be bad. In that sense, integrity of the kind you're praising is actually potentially very bad and dangerous, even if there are some readings of "rational" on which it counts as rational.

Of course, what Eliezer is doing is not taking extreme actions himself, but recommending that governments do so in certain circumstances, and that is much less obviously a bad thing to do, since governments will also hear from experts who are closer to the median expert.

When a very prominent member of the community is calling for governments to pre-commit to pre-emptive military strikes against countries allowing the construction of powerful AI in the relatively near term, including against nuclear powers*, it's really time for people to actually take seriously the stuff about rejecting naive utilitarianism, where you do crazy-sounding things whenever a quick expected value calculation makes them look maximizing.

*At least I assume that's what he means by being prepared to risk a higher chance of nuclear war.

I think that it is possible to buy that humans' maximum pains and pleasures are only 14 times as intense as bees', and still think 14 bees = 1 human is silly. You just have to reject hedonism about well-being. I have strong feelings about saving humans over animals, but I have no intuition whatsoever that if my parents' dog burns her paw it hurts less than when I burn my hand. The whole idea that animals have less intense sensations than us seems to me less like a commonsense claim, and more like something people committed to both hedonism and antispeciesism made up to reconcile their intuitive repugnance at results like 10 pigs (or whatever) = 1 human. (Bees are kind of a special case, because lots of people are confident they aren't conscious at all.)

Remember that AGI is a pretty vague term by itself, and some people are forecasting on the specific definitions in the Metaculus questions. This matters because those definitions don't require anything inherently transformative, like being able to automate all labour or scientific research. Rather, they involve a bunch of technical benchmarks that aren't that important on their own, which are being presumed to correlate with the transformative stuff we actually care about.

I feel like an equally informative version of this is "people are more critical of the bad behavior of those they disagree with politically", and then it sounds relevant, yes, but far less sinister and discrediting.

What "organization" do you currently have evidence is "running" a negative PR campaign against us because we directly threaten its interests? We're not a threat to TIME magazine in any way I can see. 

How would you decide how to prioritize spending between humans and animals in a way that didn't raise this issue? This feels to me like a disguised argument against any concern for animals whatsoever, since the actual numbers in the comparison aren't really what's generating the intuitive repugnance so much as the comparison at all, as evidenced by 'faced with a dying baby or a billion dying insects you'd save the baby'. Is your view that all animal rights charity is creepy because the money could have been spent on people instead? Or just that making explicit that doing animal rights charity means not helping people instead, and so implies a view about trade-offs, is creepy? Lying about why you're doing what you're doing is also, by definition, untrustworthy.

I also think what you're doing here is a bit sleazy: you're blurring the line, I think, between 'even if this is right, it ought to be obvious to you that you should lie about it for PR reasons, you weirdo' and 'this is obviously ridiculous, as seen by the fact that normal people disagree', so that you can't get pinned by the objections to either individually. (They're consistent, so you can believe both, but they are distinct.)

FTX and Alameda sound extremely bad to me (obviously worse in effect than Leverage!) in a way that is not particularly "cult", although I get that's a bit vague (and stories of SBF threatening people are closer to that, as opposed to the massive fraud). As for the other stuff, I haven't heard the relevant stories, but you may be right; I am not particularly clued into this stuff, and it's possible it's just a coincidence that I have heard about crazy founder worship, sleep deprivation, and vague stuff about cleansing yourself of "metaphorically" demonic forces at Leverage but not at those other places. I recall bullying accusations against someone high up at Nonlinear, but not the details. Probably I shouldn't have made the relative comparison between rationalists and non-rationalists, because I haven't really been following who all the orgs are and what they've been doing. Though on the other hand, I feel like the rationalists have hit a high enough level of cult-y incidents that the default is probably that other orgs are less like that. But maybe I should have just stuck to 'there are conflicting reports on whether epistemics are actually all that good in the Bay scene, and some reasonable evidence against.'
