Zach Stein-Perlman

Bio

AI strategy & governance. Blog: Not Optional.

My favorite AI governance research since this post (putting less thought into this list):

  1. Responsible Scaling Policies (METR 2023)
  2. Deployment corrections (IAPS: O'Brien et al. 2023)
  3. Open-Sourcing Highly Capable Foundation Models (GovAI: Seger et al. 2023)
  4. Do companies’ AI Safety Policies meet government best practice? (CFI: Ó hÉigeartaigh et al. 2023)
  5. AI capabilities can be significantly improved without expensive retraining (Davidson et al. 2023)

I mostly haven't read recent research on compute governance (e.g. 1, 2) or international governance (e.g. 1, 2, 3); some of that would probably be on this list if I had.

I'm looking forward to the final version of the RAND report on securing model weights.

Feel free to mention your favorite recent AI governance research here.

I appreciate it. I'm pretty sure I have better options than finishing my Bachelor's; details are out of scope here, but I'm happy to chat sometime.

TLDR: AI governance; maybe adjacent stuff.

Skills & background: AI governance research; email me for info on my recent work.

Location: flexible.

LinkedIn: linkedin.com/in/zsp/.

Email: zacharysteinperlman at gmail.

Other notes: no college degree.

  1. I've left AI Impacts; I'm looking for jobs/projects in AI governance. I have plenty of runway; I'm looking for impact, not income. Let me know if you have suggestions!
    1. (Edit to clarify: I had a good experience with AI Impacts.)
  2. PSA about credentials (in particular, a bachelor's degree): they're important even for working in EA and AI safety.
    1. When I dropped out of college to work on AI safety, I thought credentials were mostly important as evidence of performance for people unfamiliar with your work, and necessary in high-bureaucracy institutions (academia, government). It turns out that credentials matter, for rational, optics-y reasons, even for working with many people who know you (such that the credential provides no extra evidence) and are willing to defy conventions: many AI governance professionals/orgs are worried (often rationally) about appearing unserious by hiring or publicly collaborating with the uncredentialed. Plus, irrationally credentialist organizations are very common and important, and may even comprise a substantial fraction of EA jobs and x-risk-focused AI governance jobs (which I had expected to be more convention-defying). And sometimes an organization/institution is credentialist even when it's led by weird AI safety people (those people operate under constraints).
      1. Disclaimer: the evidence from my experiences for these claims is pretty weak. This point's epistemic status is considerations plus impressions from a few experiences, not established fact.
      2. Upshot: I'd caution people against dropping out of college to increase impact unless they have a great plan.
      3. (Edit to clarify: this paragraph is not about AI Impacts — it's about everyone else.)

You don't need EA or AI safety motives to explain the event. Later reporting suggested it was caused by (1) Sutskever and other OpenAI executives telling the board that Altman often lied (WSJ, WaPo, New Yorker) and (2) Altman dishonestly attempting to remove Toner from the board (on the obvious pretext that her coauthored paper Decoding Intentions was too critical of OpenAI, plus allegedly telling board members, falsely, that McCauley wanted Toner removed) (NYT, New Yorker). As far as I know, there's ~no evidence that EA or AI safety motives were relevant beyond the composition of the board. This isn't much of a mystery.

See generally gwern's comments.

Thanks!

General curiosity. Looking at it, I'm interested in my total hours and karma change. I wish there were a good way to remind me of... everything about how I interacted with the forum in 2022, but wrapped doesn't do that (and probably ~can't; probably I should just skim my posts from that year...)

Cool. Is it still possible to see my 2022 wrapped?

I object to your translation of actual votes into approval votes and RCV votes, at least in the case of my vote. I gave almost all of my points to my top pick, almost all of the rest to my second pick, almost all of the rest to my third pick, and so forth until I was sure I had chosen something that would make the top 3. But, e.g., I would have approved of multiple candidates. (Sidenote: I claim my strategy is optimal under very reasonable assumptions/approximations; you shouldn't distribute points like you're trying to build a diverse portfolio. See the sketch below.)

we are convinced this push towards decentralization will make the EA ecosystem more resilient and better enable our projects to pursue their own goals.

I'm surprised. Why? What was wrong with the EV sponsorship system?

(I've seen Elizabeth's and Ozzie's posts on this topic and didn't think the downsides of sponsorship were decisive. Curious which downsides were decisive for you.)

[Edit: someone offline told me probably shared legal liability is pretty costly.]
