AI strategy & governance. Blog: Not Optional.
I appreciate it; I'm pretty sure I have better options than finishing my Bachelor's; details are out-of-scope here but happy to chat sometime.
TLDR: AI governance; maybe adjacent stuff.
Skills & background: AI governance research; email me for info on my recent work.
Location: flexible.
LinkedIn: linkedin.com/in/zsp/.
Email: zacharysteinperlman at gmail.
Other notes: no college degree.
You don't need EA or AI safety motives to explain the event. Later reporting suggested that it was caused by (1) Sutskever and other OpenAI executives telling the board that Altman often lied (WSJ, WaPo, New Yorker) and (2) Altman dishonestly attempting to remove Toner from the board (on the obvious pretext that her coauthored paper Decoding Intentions was too critical of OpenAI, plus allegedly falsely telling board members that McCauley wanted Toner removed) (NYT, New Yorker). As far as I know, there's ~no evidence that EA or AI safety motives were relevant, beyond the composition of the board. This isn't much of a mystery.
See generally gwern's comments.
Thanks!
General curiosity. Looking at it, I'm most interested in my total-hours and karma-change. I wish there were a good way to remind me of everything about how I interacted with the forum in 2022, but Wrapped doesn't do that (and probably ~can't; probably I should just skim my posts from that year...)
I object to your translation of actual votes into approval votes and RCV votes, at least in the case of my vote. I gave almost all of my points to my top pick, almost all of the rest to my second pick, almost all of the rest to my third pick, and so forth until I was sure I had chosen something that would make the top 3. But, e.g., I would have approved of multiple candidates. (Sidenote: I claim my strategy is optimal under very reasonable assumptions/approximations. You shouldn't distribute points like you're trying to build a diverse portfolio.)
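The "don't diversify" sidenote can be illustrated with a toy model (my own sketch, not something from the original comment): if each point given to a candidate independently adds a small, constant increment to that candidate's chance of winning, then expected value is linear in the allocation, and a linear objective is maximized at a corner, i.e. by concentrating points rather than spreading them. The utilities and allocations below are hypothetical.

```python
# Toy model: expected value of a point allocation, assuming each point
# adds a constant probability increment p to its candidate's win chance.
def expected_value(points, utilities, p=1e-4):
    # Linear objective: sum over candidates of (points * p * utility).
    return sum(pt * p * u for pt, u in zip(points, utilities))

utilities = [10, 8, 5]       # hypothetical value of each candidate winning
concentrated = [100, 0, 0]   # all points on the top pick
diversified = [40, 35, 25]   # "portfolio" spread across picks

# Under linearity, concentrating on the argmax-utility candidate wins.
assert expected_value(concentrated, utilities) > expected_value(diversified, utilities)
```

Diversification only helps if you're risk-averse over outcomes or if marginal returns to points diminish; under the linear approximation, neither applies, so the corner solution is optimal.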
"we are convinced this push towards decentralization will make the EA ecosystem more resilient and better enable our projects to pursue their own goals."
I'm surprised. Why? What was wrong with the EV sponsorship system?
(I've seen Elizabeth's and Ozzie's posts on this topic and didn't think the downsides of sponsorship were decisive. Curious which downsides were decisive for you.)
[Edit: someone offline told me probably shared legal liability is pretty costly.]
My favorite AI governance research since this post (putting less thought into this list):
I mostly haven't read recent research on compute governance (e.g. 1, 2) or international governance (e.g. 1, 2, 3). Probably some of it would be on this list if I had.
I'm looking forward to the final version of the RAND report on securing model weights.
Feel free to mention your favorite recent AI governance research here.