David Mathers

4051 karma


If you know how to do this, maybe it'd be useful to do it. (Maybe not, though; I've never actually seen anyone defend "the market assigns a non-negligible probability to an intelligence explosion.")

I haven't had time to read the whole thing yet, but I disagree that the problem Wilkinson is pointing to with his argument is just that it is hard to know where to put the cut, because putting it anywhere is arbitrary. The issue to me seems more like this: for any of the individual pairs in the sequence, looked at in isolation, rejecting the view that the very, very slightly lower probability of the much, MUCH better outcome is preferable seems insane. Why would you ever reject an option with a trillion trillion times better outcome just because it was 1x10^-999999999999999999999999999999999999 less likely to happen than the trillion trillion times worse outcome (assuming that for both options, if you don't get the prize, the result is neutral)? The fact that it is hard to say where in the sequence it is best to first make that apparently insane choice also seems concerning, but less central to me.
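To make the pairwise comparison concrete, here is a toy expected-value calculation. All the specific numbers (probabilities, payoffs) are my own illustrative assumptions, not anything from Wilkinson's argument; the point is just that the tiny probability penalty is dwarfed by the payoff multiplier:

```python
# Toy expected-value comparison for one pair in the sequence.
# All numbers are assumed purely for illustration.
# Option A: probability p of a prize worth v; otherwise a neutral (0) outcome.
# Option B: very slightly lower probability, but a vastly better prize.

p = 1e-6           # probability of winning the worse prize (assumed)
v = 1.0            # value of the worse prize (assumed)
epsilon = 1e-30    # tiny probability penalty on the better option (assumed)
multiplier = 1e24  # "a trillion trillion times better"

ev_worse = p * v
ev_better = (p - epsilon) * (v * multiplier)

# The better prize wins by roughly the full multiplier, because the
# probability penalty is negligible at this scale.
assert ev_better > ev_worse
print(ev_better / ev_worse)
```

On these assumed numbers the better-prize option comes out ahead by a factor of about 10^24, which is why rejecting it in any single pairwise comparison looks so strange.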

I strongly endorse the overall vibe/message of titotal's post here, but I'd add, as a philosopher, that EA philosophers are also a fairly professionally impressive bunch.

Peter Singer is a leading academic ethicist by any standards. The broadly EA-aligned work of the Global Priorities Institute (GPI) in Oxford is regularly published in leading journals. I think it is fair to say Derek Parfit was broadly aligned with EA, and a key influence on the actual EA philosophers, and many philosophers would tell you he was a genuinely great philosopher. Many of the most controversial EA ideas, like longtermism, have roots in his work. Longtermism is less like a view believed only by a few marginalised scientists, and more like, say, a controversial new interpretation of quantum mechanics that most physicists reject, but that some young people at top departments like, and which you can publish work defending in leading journals.

I want to say just "trust the market", but unfortunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean "almost certainly will be quite useful and profitable, chance of near-term AGI almost zero", or it could mean "probably won't be very useful or profitable at all, but a 1 in 1000 chance of near-term AGI supports a high valuation nonetheless", or many things in between those two poles. So I guess we are sort of stuck with our own judgment?
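The underlying arithmetic can be sketched with made-up numbers. Both "stories" below, and every figure in them, are assumptions chosen purely to illustrate that two very different probability-of-AGI scenarios can imply the same expected valuation:

```python
# Two hypothetical stories behind the same market valuation.
# All probabilities and payoffs (in $ billions) are assumed for illustration.

def expected_value(scenarios):
    """Expected valuation over (probability, payoff_in_billions) scenarios."""
    return sum(p * payoff for p, payoff in scenarios)

# Story 1: almost certainly a useful, profitable company; near-zero AGI chance.
story_1 = [(0.99, 300), (0.01, 0)]

# Story 2: probably not very profitable, but a 1-in-1000 shot at an
# AGI-scale payoff.
story_2 = [(0.999, 0), (0.001, 297_000)]

# Both stories imply roughly the same ~$297B expected valuation,
# so the valuation alone can't distinguish between them.
print(expected_value(story_1), expected_value(story_2))
```

Since both scenarios produce the same expected value, observing the valuation alone tells you little about which probability structure the market actually believes.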

It's got nothing to do with crime is my main point.

There's no reason to blame the Rationalist influence on the community for SBF that I can see. What would the connection be?

I don't see why we'd expect fewer factory farms under socialism, except via us being poorer in general. And "make everything worse for humans to make things better for animals" feels a bit "cartoon utilitarian supervillain", even if I'm not sure what is wrong with it. It's also not why socialists support socialism, even if many are also pro-animal. And if socialism worked as intended, why would factory farming decrease?

I think two things are being conflated here into a third position that no one holds.

-Some people don't like the big R community very much.

-Some people don't think improving the world's small-r rationality/epistemics should be a leading EA cause area.

These are getting conflated into:

-People don't think it's important to try hard at being small-r rational. 


I agree that some people might be running together the first two claims, and that is bad, since they are independent: it could easily be high impact to work on improving collective epistemics in the outside world even if the big R rationalist community was bad in various ways. But holding the first two claims (which I do, moderately) doesn't imply the third. I think the rationalists are often not that rational in practice, and are too open to racism and sexism. And I also (weakly) think that we don't currently know enough about "improving epistemics" for it to be a tractable cause area. But obviously I still want us to make decisions rationally, in the small-r sense, internally. Who wouldn't? Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand.

For what it's worth, I was one of the most anti-Hanania/Manifest people in the original big thread, and I don't think I'm all that "cancel-y" overall. I'm opposed to people being fired from universities for edgy right-wing opinions on empirical matters, and I'm definitely opposed to them being cut off from all jobs. I do think people should not hire open neo-Nazis (or, for that matter, left-wingers who believe in genuinely deranged antisemitic conspiracy theories) for normal jobs, but I don't think any of the Manifest speakers fell into that category. But I see a difference between the role of universities (finding out the truth no matter what by permitting very broad debate) and the role of a group like EA, which has a particular viewpoint and no obligation to invite in people who disagree with it.

I don't think anyone heavily involved in global health stuff has ever said they endorse scientific racism. But I don't think this is true about eugenics. Take the two people most associated with the founding of GWWC: you've criticized Will yourself here on the grounds that you thought some of the stuff he says in WWOTF about cloning scientific geniuses is too eugenicist, and Toby Ord was Bostrom's co-author on a paper defending attempts to increase average IQ through genetic engineering, which I'm guessing you would oppose:

(As I've said elsewhere, I have more complicated feelings about genetic enhancement. I think it is potentially beneficial, but it also tends to be correlated with bad politics, and it could be that the negative social effects of allowing it would outweigh the benefits.)
