inb4 OpenAI board put their whole bankroll short on Microsoft stock, will sell on Monday for XX billion and build their own chip factory. 😁
I just want to remind everyone that simply having part of the company budget allocated to paying people to spend their time thinking about and studying the potential impacts of the technology the company is developing is, in itself, a good thing.
As for the possibility that they would conclude the most rational thing is to stop development: I think the concern is moot anyway because of the many-player dilemma in the AI space (if one player stops, the others don't have to), which is, I think, impossible to solve from inside any single company.
Thanks for the writeup. I wonder whether you (or anyone else here) have more specific tips on where to look for high-quality resources for HR people evaluating job applications? Would love to get some inside info!
@JakubK I think that your interpretation of OP's quote is somewhat less useful, in the sense that it only retroactively explains the behavior of a "unilateralist", i.e. an actor who has already made a decision. For a generic actor, I find it less useful to ask "Am I (acting as) a unilateralist?" than "How many other actors are capable of acting, yet not acting?" Besides the abstractness of it, I see quite some overlap between what you would call "unilateralists" and simply "courageous actors". This is because there is usually what I would call an "activation energy" for actions to be taken: a bias whereby the majority of actors are more likely not to act when the true value of an action is net neutral or only slightly positive. And precisely in the scenario where the true value of an action is only slightly (but significantly) positive, you need a courageous actor to "overcome the activation energy" and do the right thing.
I'm not sure "activation energy" is the right term, but it is an observed phenomenon in ethics; see the comparison of the Fat Man variant with the classical Trolley Switch scenario. To sum up, I think that treating the unilateralist's curse purely as a statistical phenomenon is fundamentally wrong, and one must include moral dimensions when deciding whether to take the action no one else has taken despite having the option to. That said, the general advice in the post, like information sharing, is indeed helpful and applicable.