Dr. David Mathers

Comments

On Deference and Yudkowsky's AI Risk Estimates

It seems really bad, from a communications/PR point of view, to write something that was ambiguous in this way. Like, bad enough that it makes me slightly worried that MIRI will commit some kind of big communications error that gets into the newspapers and does big damage to the reputation of EA as a whole.

On Deference and Yudkowsky's AI Risk Estimates

'Here’s one data point I can offer from my own life: Through a mixture of college classes and other reading, I’m pretty confident I had already encountered the heuristics and biases literature, Bayes’ theorem, Bayesian epistemology, the ethos of working to overcome bias, arguments for the many worlds interpretation, the expected utility framework, population ethics, and a number of other ‘rationalist-associated’ ideas before I engaged with the effective altruism or rationalist communities.'

I think some of this is just a result of being a community founded partly by analytic philosophers (though, as a philosopher, I would say that!).

I think it's normal to encounter some of these ideas in undergrad philosophy programs. At my undergrad back in 2005-09 there was a whole upper-level undergraduate course in decision theory. I don't think that's true everywhere all the time, but I'd be surprised if it was wildly unusual. I can't remember if we covered population ethics in any class, but I do remember discovering Parfit on the Repugnant Conclusion in my second year of undergrad because one of my ethics lecturers said Reasons and Persons was a super-important book.

In terms of the Oxford phil scene where the term "effective altruism" was born, the main titled professorship in ethics at that time was held by John Broome, a utilitarianism-sympathetic former economist who had written famous stuff on expected utility theory. I can't remember if he was the PhD supervisor of anyone important to the founding of EA, but I'd be astounded if some of the phil. people involved had not been reading his stuff and talking to him about it. Most of the phil. physics people at Oxford were gung-ho for many worlds; it's not a fringe view in philosophy of physics as far as I know. (Though I think Oxford was kind of a centre for it and there was more dissent elsewhere.)

As far as I can tell, Bayesian epistemology, in at least some senses of that term, is a fairly well-known approach in philosophy of science. Philosophers specializing in epistemology might more often ignore it, but they know it's there. And not all of them ignore it! I'm not an epistemologist, but my doctoral supervisor was, and it's not unusual for his work to refer to Bayesian ideas in modelling stuff about how to evaluate evidence (i.e. in, uhm, defending the fine-tuning argument for the existence of God, which might not be the best use, but still!): https://www.yoaavisaacs.com/uploads/6/9/2/0/69204575/ms_for_fine-tuning_fine-tuning.pdf (John was my supervisor, not Yoav.)

A high interest in bias stuff might genuinely be more an Eliezer/LessWrong legacy though. 

On Deference and Yudkowsky's AI Risk Estimates

'If you'd always assumed he's wrong about literally everything, it should be telling for you that OP had to go 15 years back to get good examples.' How strong this evidence is also depends on whether he has made many resolvable predictions since 15 years ago, right? If he hasn't, it's not very telling. To be clear, I genuinely don't know whether he has or hasn't.

On Deference and Yudkowsky's AI Risk Estimates

For all I know, you may be right or not (insofar as I follow what's being insinuated). But whilst I freely admit that I, like anyone who wants to work in EA, have self-interested incentives not to be too critical of Eliezer, there is no specific secret "latent issue" that I personally am aware of and am consciously avoiding talking about. Honest.

On Deference and Yudkowsky's AI Risk Estimates

Several thoughts:

  1. I'm not sure I can argue for this, but it feels weird and off-putting to me that all this energy is being spent discussing how good a track record one guy has, especially one guy with a very charismatic and assertive writing style and a history of attempting to provide very general guidance for how to think across all topics (though I guess any philosophical theory of rationality does the last thing). It just feels like a bad sign to me, though that could just be for dubious social reasons.

  2. The question of how much to defer to E.Y. isn't answered just by things like "he has possibly the best track record in the world on this issue." If he's out of step with other experts, and by a long way, we need a reason to think he outperforms the aggregate of experts before we weight him more heavily than the aggregate. And it's entirely normal, I'd have thought, for the aggregate to significantly outperform the single best individual. (I'm not making as strong a claim as that the best individual outperforming the aggregate is super-unusual and unlikely.) Of course, if you think he's nearly as good as the aggregate, then you should still move a decent amount in his direction. But even that is quite a strong claim, one that goes beyond him being in the handful of individuals with the best track record.

  3. It strikes me as problematic, for a couple of reasons, that some of the people criticizing this post on the grounds that E.Y. actually has a great track record keep citing "he's been right that there is significant X-risk from A.I., when almost everyone else missed that."

Firstly, this isn't actually a prediction that has been resolved as correct in any kind of unambiguous way. Sure, a lot of very smart people in the EA community now agree. (And I agree the risk is worth assigning EA resources to as well, to be clear.) But, in my view, we should be wary of substituting the judgment of the community that a prediction looks rational for a track record of predictions that have actually resolved successfully. (I think the latter is better evidence than the former in most cases.)

Secondly, I feel like E.Y. being right about the importance of A.I. risk is actually not very surprising, conditional on the key assumption about E.Y. that Ben is relying on when he tells people to be cautious about the probabilities and timelines E.Y. gives for A.I. doom; and even so, IF Ben's assumption is correct, it's still a good reason to doubt E.Y.'s p(doom). Suppose, as is being alleged here, someone has a general bias, for whatever reason, towards the view that doom from some technological source or other is likely and imminent. Does that make it especially surprising that that individual finds an important source of doom most people have missed? Not especially, as far as I can see: sure, they will perhaps be less rational on the topic, but a) a bias towards p(doom) being high doesn't necessarily imply being poor at ranking sources of doom-risk by relative importance, and b) there is probably a counter-effect where a bias towards doom makes you more likely to find underrated doom-risks, because you spend more time looking. Of course, finding a doom-risk larger than most others that approximately everyone had missed would still be a very impressive achievement. But the question Ben is addressing isn't "is E.Y. a smart person with insights about A.I. risk?" but rather "how much should we update on E.Y.'s views about p(near-term A.I. doom)?" Suppose significant bias towards doom is genuinely evidenced by E.Y.'s earlier nanotech prediction (which, to be fair, is only one data point), and a good record at identifying neglected, important doom sources is only weak evidence that E.Y. lacks the bias. Then we'd be right to only update a little towards doom, even if E.Y.'s record on A.I. risk was impressive in some ways.

DeepMind’s generalist AI, Gato: A non-technical explainer

Ah, I made an error here: I misread what was in which thread and thought Amber was talking about Gwern's comment rather than your original post. The post itself is fine! Sorry!

DeepMind’s generalist AI, Gato: A non-technical explainer

For what it's worth, as a layperson, I found it pretty hard to follow properly. I also think there's a selection effect where people who found it easy will post but people who found it hard won't. 

Bad Omens in Current Community Building

I suspect that it varies within the domain of X-risk focused work how weird and cultish it looks to the average person. I think both A.I. risk stuff and a generic "reduce extinction risk" framing will look more "religious" to the average person than "we are worried about pandemics and nuclear wars."

EA will likely get more attention soon

Also, I doubt Torres is writing in bad faith exactly. "Bad faith" to me has connotations of saying stuff you know to be untrue, whereas with Torres I'm sure he believes what he's saying; he's just angry about it, and anger biases.

EA will likely get more attention soon

In my view, Phil Torres' stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately (even if he misleads by omission somewhat*), and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like "is adding happy people actually good anyway?", get associated with less fair criticism, like "Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places", potentially biasing us against the legit stuff in a dangerous way.

But there could (again, in my view) easily be a wave of criticism coming from people who share Torres' political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven't really taken the time to understand EA/longtermist/AI safety ideas in the first place. I've already seen one decently well-known anti-"tech" figure on Twitter retweet a tweet that in its entirety consisted of "long-termism is eugenics!". People should prepare emotionally (I have already mildly lost my temper on Twitter in a way I shouldn't have, but at least I'm not anyone important!) for keeping their cool in the face of criticism that is:
- Poorly argued
- Very rhetorically forceful
- Based on straightforward misunderstandings
- Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions
- Reliant on guilt-by-association tactics of an obviously unreasonable sort**: i.e. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
- Aimed at motives, not just ideas
- Gendered in a way that will play directly to the personal insecurities of some male EAs.

Alas, stuff can be all those things and also identify some genuine errors we're making. It's important we remain open to that, and also don't get too polarized politically by this kind of stuff ourselves. 

* (i.e. he leaves out reasons to be longtermist that don't depend on total utilitarianism or adding happy people being good, doesn't discuss why you might reject person-affecting population ethics etc.)

** I say "of an unreasonable sort" because in principle people's associations can be legitimately criticized if they have bad effects, just like anything else. 
