Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave; transcript here.
* First, there’s still tons of alpha left in having good takes.
* (Matt Reardon originally said this to me and I was like, “what, no way”, but now I think he was right and this is still true – thanks Matt!)
* You might be surprised, because there are many people doing AI safety and governance work, but I think there’s still plenty of demand for good takes, and you can distinguish yourself professionally by being a reliable source of them.
* But how do you have good takes?
* I think the way you form good takes, oversimplifying only slightly, is this: you read Learning by Writing and go “yes, that’s how I should orient to the reading and writing that I do.” Then you do that a bunch of times with your reading and writing on AI safety and governance, share your writing somewhere, have lots of conversations with people about it, change your mind, and learn more. That’s how you have good takes.
* What to read?
* Start with the basics (e.g. BlueDot’s courses, or other reading lists), then work from there towards what’s both interesting and important
* Write in public
* Usually, if you don’t yet have evidence that your takes are excellent, it’s not that useful to just voice them unprompted. I think having takes and backing them up with some evidence, or saying things like “I read this thing, here’s my summary, here’s what I think”, is useful. But it’s hard to get readers to care if you’re just like “I’m some guy, here are my takes.”
* Some especially useful kinds of writing
* To get people to care about your takes, you could start with especially useful kinds of writing, like:
* Explaining important concepts
* E.g., evals awareness, non-LLM architectures (should I care? why?), AI control, best arguments for/against sho