
Habryka

18965 karma · Joined Sep 2014

Bio

Project lead of LessWrong 2.0, often helping the EA Forum with various issues with the forum. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments
1191

Topic contributions
1

I take a very longtermist and technology-development focused view on things, so the GHD achievements weigh a lot less in my calculus. 

The vast majority of world-changing technology was developed or distributed through for-profit companies. My sense is nonprofits are also more likely to cause harm than for-profits (for reasons that would require its own essay to go into, but are related to their lack of feedback loops).

This is an extremely rich guy who isn't donating any of his money.

FWIW, I totally don't consider "donating" a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies. 

I don't have a particularly strong take on Bryan Johnson, but using "donations" as a proxy seems pretty bad to me.

Less than a year ago DeepMind and Google Brain were two separate companies (both making cutting-edge contributions to AI development). My guess is that if you broke off DeepMind from Google, you would just pretty quickly get competition between DeepMind and Google Brain again (and more broadly make slowing things down a more multilateral coordination problem).

But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since that action itself might invite further anti-trust action.

Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).

Yeah, that's a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think do also make sense to treat separately, though I at least definitely intend to try to debias my statements about individuals on this dimension in particular, after my experiences with SBF.

Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine, I think).

Like, Sean's comment basically said "I think it was directly Bostrom's fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain". My comment is more specific, but I don't really see it as harsher. I also have a prior against going into critiques of individual people, but that's what Sean did in this context (of course Bostrom's judgement is relevant, but I think in that case so is Sean's).

Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action, using non-shared assumptions, that there is pushback. 

The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don't think it's common for people to push back when someone is expressing some personal belief of theirs that is only affecting their own actions. 

In this case, I think it's somewhat ambiguous whether I was arguing for a collective path of action or just explaining my private beliefs. By making a public comment I at least asserted some claim to relevance for others, but I also didn't explicitly say that I was trying to get anyone else to change their behavior.

And in either case, invoking social censure on the basis of someone expressing a belief of theirs without also giving a comprehensive argument for that belief seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don't think EA has historically been such a place, nor wants to be such a place).

This also roughly matches my impression. I do think I would prefer the EA community to move towards either more or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.

I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start

Yep, that's the one I was thinking about. I've changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.

When people make claims, we expect there to be some justification proportional to the claims made.

To be clear, I also absolutely do not hold myself to this standard. I feel totally fine casually mentioning controversial and important beliefs of mine whenever it seems relevant, without an obligation to fully back them up, and I encourage others to do the same. Indeed, I am pretty confused about what norm you are referring to here, since I can't think of this norm holding in almost any context I am in.

If someone mentions they believe in god, I don't expect that this means they are ready or want to have a conversation about theology with me right then and there. When someone says they vote libertarian in the US general election I totally don't expect to have a conversation with them about macroeconomic principles right there. People express large broad claims all the time without wanting to go into all the details.
