
JackM

4044 karma · Joined

Bio

Feel free to message me on here.

Comments (673)

The person who gets the role is obviously going to be highly intelligent, probably socially adept, and highly qualified, with experience working in AI etc. OpenAI wouldn't hire someone who wasn't.

The question is: do you also want this person to care about safety? If so, I would think advertising on the EA job board would increase the chance of that.

If you think EAs, or people who look at the 80K Hours job board, are for some reason epistemically worse than others, then you will have to explain why, because I believe the opposite.

You're referring to job boards generally, but we're talking about the 80K job board, which is no typical job board.

I would expect someone who will do a good job to be someone who goes in wanting to stop OpenAI from destroying the world. That sounds like someone who would read the 80K Hours job board. 80K is all about preserving the future.

They of course also have to be good at navigating organizational social scenes while holding onto their own epistemics, which in my opinion are skills commonly found in the EA community!

I think the evidence we have from OpenAI is that it isn't very helpful to "be a safety conscious person there".

It's insanely hard to have an outsized impact in this world. Of course it's hard to change things from inside OpenAI, but that doesn't mean we shouldn't try. If we succeed it could mean everything. You're probably going to have lower expected value pretty much anywhere else IMO, even if it does seem intractable to change things at OpenAI.

I think it's especially not helpful if you're a low-context person who reads an OpenAI job board posting and isn't going in with a specific plan to operate in an adversarial environment.

Surely this isn't the typical EA though?

If OpenAI doesn't hire an EA they will just hire someone else. I'm not sure if you tackle this point directly (sorry if I missed it) but doesn't it straightforwardly seem better to have someone safety-conscious in these roles rather than someone who isn't safety-conscious? 

To reiterate, it's not as if removing these roles from the job board makes them less likely to be filled. They would still definitely be filled, just by someone less safety-conscious in expectation. And I'm not sure the person who would get the role would be "less talented" in expectation, because there are just so many talented ML researchers, so I'm not sure removing roles from the job board would slow down capabilities development much, if at all.

I get the sense that your argument is somewhat grounded in deontology/virtue ethics (i.e. "don't support a bad organization") but perhaps not so much in consequentialism?

That's great of course. I still wouldn't have chosen your title. But thank you for spreading the word to those who have influence!

Thanks for sharing. Your post title is very misleading though. I wouldn't be surprised if Mr Beast has never even heard of EA. I'm not against clickbaity titles that are more or less accurate but exaggerated, but "Mr Beast is now officially an EA!" seems simply incorrect. Not a huge deal, but I was quite excited when I clicked on this post, only to be left a bit disappointed. It may be worth clarifying in the text that Mr Beast hasn't actually signalled agreement with EA principles.

Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, their incentive to cut corners should decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners.

Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.

What are you suggesting? That if we direct safety-conscious people to Anthropic, it will make it more likely that Anthropic starts to cut corners? I'm not sure what your point is.

I've just thought of a counter-argument to my point. If OpenAI isn't safe, it may be worth trying to ensure a safer AI lab (say Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if it's as part of product or capabilities teams.
