Agrippa

873 karma · Joined Dec 2018

Posts
3


Comments
125

Given your position, I am concerned about the arms-race accelerationism messaging in this post. Substantively, the major claims of this post are "China's AI progress poses a serious threat we must overcome via AI progress (that is, we are in an arms race)" and "society may regulate AI such that projects that don't meet a very high standard of safety will not be deployable". The argument is that pursuing safety follows from these premises, mostly the latter.

This can be interpreted in a number of ways, charitably or uncharitably. Independent of that, I do not think it is really a good idea to talk this way about AI, re: geopolitics. It has a very bad track record with other things, such as nukes, and I'm not sure who the intended audience is (are capabilities CEOs China hawks who can only be convinced to slow down if the argument is framed in terms of beating China? Big if true).

Hmm. I think if I had been in an abusive situation such as the ones OP describes, and I (privately) went to the Community Health team about it, and the only outcomes were what you just listed, I would have considered it a waste of my time and emotional energy. 

Edit: waste of my time relative to "going public", that is.

We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation.

What happened as a result of this, before Ben posted? 

Thanks for writing, I hope things change. 
PS: I think the name "Ratrick Bayesman" will live in my head for at least 5 years

Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.

I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums. 

Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem solving. Forum posts, by contrast, tend to report the results of problem solving or report news. I would rather be engaging with people before that step, but I don't know of a place where one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.

From private convos I am pretty sure that the tweet about Mike Vassar is in reference to this: https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg?commentId=FCcEMhiwtkmr7wS84 (which is about Mike Vassar, not Jacy).

There may or may not be other things informing it, but it's not about Jacy.

"It doesn't exist" is too strong for sure. I consider GiveWell central to the randomista part and it was my entrypoint into EA at large. Founder's Pledge was also pretty randomista back when I was applying for a job there in college. I don't know anything about HLI. 

There may be a thriving community around GiveWell etc. that I am ignorant of. Or maybe, if I tried to filter out non-randomista stuff from my mind, I would naturally focus more on randomista stuff when engaging with EA feeds.

The reality is that I find stuff like "people just doing AI capabilities work and calling themselves EA" to be quite emotionally triggering, and when I'm exposed to it that's where my attention goes (if I'm not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.

All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.

17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I've had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn't seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn't have been so much of a chilling effect so that nobody brings up these topics).


First of all, yikes.
Second of all, I think I could always sense that things were like this (broadly speaking), but simultaneously worried I was just paranoid and deranged. I think that this dynamic has been quite bad for my mental health. 

  • I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very "randomista" flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.

I feel like I joined EA for this "randomista" flavored version of the movement. I don't really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it's uncontroversially good and inspiring!!!!). I found it validating for you to point this out.

If it does exist, it hasn't recruited me despite my pretty concentrated efforts over several years. And I'm not sure why it wouldn't. 

I don't have a problem with longtermist principles. As far as I'm concerned, maybe the best way to promote long-term good really is to take huge risks at the expense of community health / downside risks / integrity, à la SBF (among others). But I don't want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don't want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.

My critique seems resilient to this consideration. The fact that managers do not publicly criticize employees is not evidence of discomfort or awkwardness. Under the very obvious model of "how would a manager get what they want re: an employee", public criticism is not a sensible lever to want to use.
