All of Dr. David Mathers's Comments + Replies

Who wants to be hired? (May-September 2022)

Dr. David Mathers

Location: Penicuik, Scotland, United Kingdom
Remote: Yes
Willing to relocate: Yes 
Skills: Research (especially in philosophy), forecasting.
CV: Dr. David Mathers-CV-June 2022 - Google Docs
Email: davidabm1@gmail.com

Notes: 
-PhD in philosophy from the University of Oxford. 
-Giving What We Can member since 2012
-Last summer, as a research intern for Rethink Priorities, I wrote a 50,000-word report in four months on a technical issue in the philosophy of mind relevant to cause prioritization.
-Currently taking part in a forecasting tournamen... (read more)

On Deference and Yudkowsky's AI Risk Estimates

It seems really bad, from a communications/PR point of view, to write something that was ambiguous in this way. Like, bad enough that it makes me slightly worried that MIRI will commit some kind of big communications error that gets into the newspapers and does big damage to the reputation of EA as a whole.

On Deference and Yudkowsky's AI Risk Estimates

'Here’s one data point I can offer from my own life: Through a mixture of college classes and other reading, I’m pretty confident I had already encountered the heuristics and biases literature, Bayes’ theorem, Bayesian epistemology, the ethos of working to overcome bias, arguments for the many worlds interpretation, the expected utility framework, population ethics, and a number of other ‘rationalist-associated’ ideas before I engaged with the effective altruism or rationalist communities.'

I think some of this is just a result of being a community founded ... (read more)

Speaking for myself, I was interested in a lot of the same things in the LW cluster (Bayes, approaches to uncertainty, human biases, utilitarianism, philosophy, avoiding the news) before I came across LessWrong or EA. The feeling is much more like "I found people who can describe these ideas well" than "oh these are interesting and novel ideas to me." (I had the same realization when I learned about utilitarianism...much more of a feeling that "this is the articulation of clearly correct ideas, believing otherwise seems dumb").

That said, some of the ideas ... (read more)

the main titled professorship in ethics at that time was held by John Broome, a utilitarianism-sympathetic former economist who had written famous stuff on expected utility theory. I can't remember if he was the PhD supervisor of anyone important to the founding of EA, but I'd be astounded if some of the philosophy people involved in that had not been reading his stuff and talking to him about it.

Indeed, Broome co-supervised the doctoral theses of both Toby Ord and Will MacAskill. And Broome was, in fact, the person who advised Will to get in touch with Toby, before the two had met.

3 · Guy Raveh · 2mo
Veering entirely off-topic here, but how does the many worlds hypothesis tie in with all the rest of the rationality/EA stuff?
On Deference and Yudkowsky's AI Risk Estimates

'If you'd always assumed he's wrong about literally everything, it should be telling for you that OP had to go 15 years back to get good examples.' How strong this evidence is also depends on whether he has made many resolvable predictions since 15 years ago, right? If he hasn't, it's not very telling. To be clear, I genuinely don't know whether he has or hasn't.

7 · Guy Raveh · 2mo
Sounds reasonable. Though predictions aren't the only thing one can be demonstrably wrong about.
On Deference and Yudkowsky's AI Risk Estimates

For all I know, you may be right or not (insofar as I follow what's being insinuated), but whilst I freely admit that I, like anyone who wants to work in EA, have self-interested incentives not to be too critical of Eliezer, there is no specific secret "latent issue" that I personally am aware of and am consciously avoiding talking about. Honest.

4 · Charles He · 2mo
I am grateful for your considerate comment and your reply. I had no belief or thought about dishonesty. Maybe I should have added[1]:
* "this is for onlookers"
* "this is trying to rationalize/explain why this post exists, that has 234 karma and 156 votes, yet only talks about high school stuff."
I posted my comment because this situation is hurting onlookers and producing bycatch? I don't really know what to do here (as a communications thing) and I have incentives not to be involved?
1. ^ But this is sort of getting into the elliptical rhetoric and self-referential stuff, that is sort of related to the problem in the first place.
On Deference and Yudkowsky's AI Risk Estimates

Several thoughts:

  1. I'm not sure I can argue for this, but it feels weird and off-putting to me that all this energy is being spent discussing how good a track record one guy has, especially one guy with a very charismatic and assertive writing style, and a history of attempting to provide very general guidance for how to think across all topics (though I guess any philosophical theory of rationality does the last thing). It just feels like a bad sign to me, though that could just be for dubious social reasons.

  2. The question of how much to defer to E.Y.

... (read more)
-28 · Charles He · 2mo
DeepMind’s generalist AI, Gato: A non-technical explainer

Ah, I made an error here: I misread what was in which thread and thought Amber was talking about Gwern's comment rather than your original post. The post itself is fine! Sorry!

3 · frances_lorenz · 3mo
Oh that's totally okay, thanks for clarifying!! And good to get more feedback because I was/am still trying to collect info on how accessible this is
DeepMind’s generalist AI, Gato: A non-technical explainer

For what it's worth, as a layperson, I found it pretty hard to follow properly. I also think there's a selection effect where people who found it easy will post but people who found it hard won't. 

2 · frances_lorenz · 3mo
this is really good to know, thank you!! I'm thinking we hit more of a 'familiar with some technical concepts/lingo' accessibility level rather than being accessible to people who truly have no/little familiarity with the field/concepts. Curious if that seems right or not (maybe some aspects of this post are just broadly confusing). I was hoping this could be accessible to anyone so will have to try and hit that mark better in the future.
Bad Omens in Current Community Building

I suspect that how weird and cultish X-risk-focused work looks to the average person varies within that domain. I think both A.I. risk stuff and a generic "reduce extinction risk" framing will look more "religious" to the average person than "we are worried about pandemics and nuclear wars."

EA will likely get more attention soon

Also, I doubt Torres is writing in bad faith, exactly. "Bad faith" to me has connotations of 'saying stuff they know to be untrue', whereas with Torres I'm sure he believes what he's saying; he's just angry about it, and anger biases.

6 · ZachWeems · 3mo
Agreed. My model is: he has a number of frustrations with EA. That on its own isn't a big deal; there are plenty of valid, invalid, and arguable gripes with various aspects of EA. But he also has a major bucket error where the concept of "far-right" is applied to a much bigger Category of bad stuff. Since some aspects of EA & longtermism seem to be X to him, and X goes in the Category, and stuff in the Category is far-right, EA must have far-right aspects. To inform people of the problem, he writes articles claiming they're far-right.

If EAs say his claims are factually false, he thinks the respondents are fooling themselves. After all, they're ignoring his wider point that EA has stuff from the Category, in favor of the nitpicky technicalities of his examples. He may even think they're trying to motte & bailey people into thinking EA & longtermism can't possibly have X. To me, it sounds like his narrative is now that he's waging a PR battle against Bad Guys.

I'm not sure what the Category is, though. At first I thought it was an entirely emotional thing: stuff that makes him sufficiently angry, or a certain flavor of angry, or anything where he can't verbalize why it makes him angry, is assumed to be far-right. But I don't think that fits his actions. I don't expect many people can decide "this makes me mad, so it's full of white supremacy and other ills", run a years-long vendetta on that basis, and still have a nuanced conversation about which parts aren't bad.

Now I think X has a "shape": with time & motivation, in a safe environment, Torres could give a consistent definition of what X is and isn't. And with more of those, he could explain what it is & why he hates it without any references to far-right stuff. Maybe he could even do an ELI5 of why X goes in the same Category as far-right stuff in the first place. But there's not much chance of this actually happening, since it requires him being vulnerable with a mistrusted representative of the Bad Guys.
3 · MikeJ · 3mo
Yes, I'm always unsure of what "bad faith" really means. I often see it cited as a main reason to engage or not engage with an argument. But I don't know why it should matter to me what a writer or journalist intends deep down. I would hope that "good faith" doesn't just mean already being aligned on overall goals. To be more specific, I keep seeing references to hidden context behind Phil Torres's pieces. To someone who doesn't have the time to read through many cryptic old threads, this just makes me skeptical that the bad-faith criticism is useful in discounting or not discounting an argument.
EA will likely get more attention soon

In my view, Phil Torres' stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like 'is adding happy people... (read more)

2 · jacquesthibs · 3mo
Great points, here's my impression:

Meta-point: I am not suggesting we do anything about this or that we start insulting people and losing our temper (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres' stuff was the worst we can expect. I am still reading Torres' stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).

Regarding the articles: his way of writing is to tell the general story in a way that makes it obvious he knows a lot about EA and had been involved in the past, but then he bends the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in his writings, it's hard not to believe he might be doing this because it gives him plausible deniability: what he's saying is often not "wrong", but it is bent to the point that the reader ends up inferring things that are false.

To me, in the case of his latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) both think that the whole world should stop spending any money on philanthropy that helps anyone in the present (and if you do, only on those who are privileged). The uninformed reader can leave with the impression that EA doesn't even actually care about human lives. The way he writes gives him credibility with the uninformed because it's not just an all-out attack where his intentions are obvious to the reader. Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and l
1 · howdoyousay? · 3mo
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore, less of the child selves. In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!
The AI Messiah

Obvious point, but you could assign significant credence to this being the right take, and still think working on A.I. risk is very good in expectation, given exceptional neglectedness and how bad an A.I. takeover could be. Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong. 

2 · Linch · 3mo
I'm not sure if we're picking up on the same notion of sleaziness, and I guess it depends on what you mean by "significant credence" and "working on A.I. risk", but I think it's hard to imagine someone doing really good mission-critical research work if they come into it from a perspective of "oh, I don't think AI risk is at all an issue, but smart people disagree, and there's a small chance that I'm wrong and the EV is higher than working on other issues." Though I think it's plausible my psychology is less well-suited to "grim determination" than that of most people in EA. (Donations or engineering, in comparison, seem much more reasonable.)
EA frontpages should compare/contrast with other movements

One (probably surmountable but non-trivial in my view) problem with this is that once you start trying to draft a statement about exactly what attitude we have to capitalism/economics you'll start to see underlying diversity beneath "don't want to abolish capitalism." This, I predict, will make it trickier than it seems to come up with anything clear and punchy that everyone can sign onto. In particular, leaving aside for a minute people with actually anti-capitalist views, you'll start to see a split between people with actual neo-liberal or libertarian e... (read more)

1 · acylhalide · 3mo
I see. I think your answer, exactly as you've said it, would be useful to add to intro pages. Maybe a survey result could be added, for example. Also, yeah, I don't think this should be a defining feature of EA or something to rally under, but it is important info that could be presented to someone coming to EA for the first time who is already looking for it.
Democratising Risk - or how EA deals with critics

Yeah, you're probably right. It's just I got a strong "history=Western history" vibe from the comment I was responding to, but maybe that was unfair!

Democratising Risk - or how EA deals with critics

Most whites had abhorrent views on race at certain points in the past (probably not before 1500, though, unless Medieval antisemitism counts), but that is weak evidence that most people did, since whites were always a minority. I'm not sure many of us know what racial views, if any, people held in Nigeria, Iran, China or India in 1780.

I'd be pretty surprised if almost everyone didn't have strongly racist views in 1780. Anti-black views are very prevalent in India and China today, as I understand it; e.g., Gandhi had pretty racist attitudes.

I seem to remember learning about rampant racism in China helping to cause the Taiping rebellion? And there are enormous amounts of racism and sectarianism today outside Western countries - look at the Rohingya genocide, the Rwanda genocide, the Nigerian civil war, the current Ethiopian civil war, and the Lebanese political crisis for a few examples.

Every one of these examples should be taken with skepticism as this is far outside my area of expertise. But while I agree with the sentiment that we often conflate the history of the world with the history of white people, I'm not sure it's true in this specific case.

Response to Recent Criticisms of Longtermism

For what it's worth, I think the basic critique of total utilitarianism, that 'it's just obviously more important to save a life than to bring a new one into existence', is actually very strong. I think insofar as longtermist folk don't see that, it's probably a) because it's so obvious that they are bored with it by now, and b) because Torres' tone is so obnoxious and plausibly motivated by personal animosity. But neither of those is a good reason to reject the objection!

First, longtermism is not committed to total utilitarianism.

Second, population ethics is notoriously difficult, and all views have extremely counterintuitive implications. To assess the plausibility of total utilitarianism (to which, again, longtermism is not committed), you need to do the hard work of engaging with the relevant literature and arguments. Epithets like "genocidal" and "white supremacist" are not a good substitute for that engagement. [EDIT: I hope it was clear that by "you", I didn't mean "you, Dr Mathers".]

If you think you have valid objections to ... (read more)