90 · Joined May 2021




As someone who has recently been in the AI Safety org interview circuit, about 50% of my interviews were traditional Leetcode-style algorithmic/coding puzzles and 50% were more practical. That split seems pretty typical of industry.

The EA orgs I interviewed with were very candid about their approach, and I was much less surprised by the style of interview I got than I was when interviewing in industry. Anthropic, Ought, and CEA all lay out very explicitly, and publicly, what their interviews look like. My experience was that the interviews matched the public descriptions very well.

Thanks! Current volume is reasonable but I will totally forward some your way if I get overwhelmed.

I forgot to mention in the body, but I should thank Yonatan for putting a draft of this together and encouraging me to post it. Thanks! I've been meaning to do this for a while.

This is a touchingly earnest comment. Also is your ldap qiurui? If those words mean nothing to you, I've got the wrong guy :)

I would not have applied without this post, and I think it also seriously increased my probability of applying to a variety of AI research roles (which I'd been putting off for years).

This almost perfectly matches my experience as a full-stack programmer at a FAANG. I especially appreciate the point that getting along well with your team-mates is a huge deal. It is a surprisingly consistent source of enjoyment in my job that I can joke and post memes to my team.

Fair enough. I would personally find it less off-putting if you framed it in terms of collecting feedback instead of focusing on the downvotes. For example, suppose I saw a thread starting with:

'I'm curious on feedback to this post. Please take this survey[link]'

and then the survey itself has questions about the positions 1/2/3/4/5 mentioned, and a question on whether the respondent up/downvoted.

Then that seems like a fine thread. You're collecting genuine feedback, maybe it seems a little over the top, but it doesn't come across as speculation on why someone disliked something. There's also an easy way for me to provide that feedback without making a public statement that people can then argue with. If I downvote something, there is a very good chance that I don't want to spend time explaining my reasoning on a public thread where I'm in a social contract to reply to objections.

I want to say that I didn't downvote the post (I think it's a relatively neat idea, and it has garnered at least one good submission).

On the other hand, I find speculation on 'why the downvotes?' to be unproductive. It's reasonable to encourage people to explain their opinions, but I've generally found that threads about downvotes are low quality, with lots of guessing and attempts to put words in other people's mouths. I don't think you're doing that here very much, but it isn't the kind of thread I'd like to see often, if at all.

It also seems odd that there are so rarely threads in the other direction, asking people to explain why they liked a particular post :)

This is an excellent point. Making a new name for an existing concept is generally bad, but utilitarianism (and the associated 'for the greater good') has been absolutely savaged in public perception.

I want to mention that I like the rounded version a lot, and the angular version is better than the current 'weird 5 stars' but not quite as neat. I think what throws me off is that the angular version looks almost exactly like a capital sigma, and sigma already means a lot of things.

I definitely sympathize with the argument against having a symbol for an idea. Both the good and the bad of symbolization is that it leads to identification.
