Habryka

16305 karma · Joined Sep 2014

Bio

Project lead of LessWrong 2.0; I also often help the EA Forum with various issues with the forum. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments (963)

Looking forward to seeing how it plays out! LessWrong made the intentional decision not to do this, because I thought posts were too large and contained too many claims for agreement/disagreement to have much natural grounding, but we'll see how it goes. I am glad we have two similar forums, so we can see experiments like this play out.

My current model is that very few of the people who went to DC to do "AI policy work" chose a career well-suited to proposing policies that help with existential risk from AI. In general, people chose more of a path of "try to be helpful to the US government" and "become influential in the AI-adjacent parts of the US government", but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. There are mostly just people whose job it is to "become influential in the US government so that later they can steer the AI existential risk conversation in a better way".

I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.

(I gave it a small downvote.) I currently think that representation of the person in question is pretty inaccurate. I have various problems with them; one of the primary ones is that they threatened an EA community institution with a libel lawsuit, which, as you might have picked up, I am not a huge fan of. But your comment seemed to me more likely to mislead (and to somewhat miasmically propagate a narrative I consider untrustworthy), and that specific request for privacy still strikes me as illegitimate (as I have commented on the relevant posts).

The OP is not titled "An incomplete list of activities that EA orgs should think about before doing"; it is titled "An incomplete list of activities that EA orgs Probably Shouldn't Do". I agree that most of the things listed in the OP seem reasonable to think about and take into account in a risk analysis, but I doubt the OP actually contributes to people doing much more of that.

I would love a post that went into more detail on "when each of these seems appropriate to me"; that seems much more helpful to me.

I think it depends a lot on what you mean by "a post like this". Like, I do think I would just really like more investigation and more airing of suspicions in general, and yeah, that includes people's concerns with Lightcone.

Some of these seem fine to me as norms, some of them seem bad. Some concrete cases: 

Live with coworkers, especially when there is a power differential and especially when there is a direct report relationship

Many startups start from someone's living room. LessWrong was built in the Event Horizon living room. This was great: I don't think it hurt anyone, and it also helped the organization survive the pandemic, which I think was quite good.

Retain someone as a full-time contractor or grant recipient for the long term, especially when it might not adhere to legal guidelines

I don't understand this. Lightcone engages in many long-term contracting relationships, which seems totally fine to me. Also, many people prefer to be grant recipients instead of employees; those come with totally different relationship dynamics.

Offer employer-provided housing for more than a predefined and very short period of time, thereby making an employee’s housing dependent on their continued employment and allowing an employer access to an employee’s personal living space

I also find this kind of dicey, though at least in Lightcone's case I think it's definitely worth it, and I know of many other cases where it seems likely worth it. We own a large event venue, and we are currently offering one employee free housing in exchange for being on call for things that happen during the night. This seems like a fair trade to me and very standard (one of the sections of our hotel is indeed explicitly zoned as a "caretaker unit" for this exact purpose).

Date the partner of their funder/grantee, especially when substantial conflict-of-interest mechanisms are not active

This seems quite a lot too micromanagey to me. I agree that there should be COI mechanisms in place, but this seems like it's trying to enforce norms on parts of people's lives that really are their own business.

This continues to feel quite a bit too micromanagey to me. Mostly, these are the complaints that seemed significant to Ben (which also roughly aligned with my assessment).

The post already took 100+ hours of effort to write. I don't think "more contextualizing" is a good use of our time, at least (though if other people want to do this kind of job and would do more of that, that seems maybe fine to me).

Like, again, I think if some people want to update that all weirdness is bad, then that's up to them. It is not my job, and indeed would be a violation of what I consider cooperative behavior, to filter evidence so that the situation here only supports my (or Ben's) position about how organizations should operate.

Yep, not clear what to do about that. Seems kind of sad, and I've strong-downvoted the relevant comment. I don't think it's mine or Ben's job to micromanage people's models of how organizations should operate.

I might be confused here, but it sure seemed easy to hand over money, but hard to verify that the insurance would actually kick in in the relevant situation, and wouldn't end up being voided for some random reason.

Using "preoccupied" feels a bit strawmanny here. People using this situation as a way to naively enforce general conservatism was one of the top concerns that kept coming up when I talked to Ben about the post and investigation.

The post has a lot of details that should allow people to build a more detailed model than "weird is bad", but I don't think it would be better for it to take a stronger stance on the causes of the problems it's providing evidence for, since getting the facts out is IMO more important.

It would seem low-integrity by my standards to decline to pursue this case because I was worried that people would misunderstand the facts in a way that would cause inconvenient political consequences for me. A lot of people have a justified interest in knowing what happened here, and I don't want to optimize hard against that just because they will predictably learn a different lesson than I have. The right thing to do is to argue in favor of my position after the facts are out, not to withhold information like this.

Also, the key components of this story are IMO mostly about the threats of retaliation and the associated information control, which I think mostly comes across to readers (at least based on the comments I've seen so far), and which really doesn't seem to have much to do with general weirdness. If anything, this kind of information control is more common in the broader world, where things like libel suits are more frequent.
