Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave; transcript here.
* First, there's still tons of alpha left in having good takes.
* (Matt Reardon originally said this to me and I was like, "what, no way", but now I think he was right and this is still true - thanks Matt!)
* You might be surprised, because there are already many people doing AI safety and governance work, but I think there's still plenty of demand for good takes, and you can distinguish yourself professionally by being a reliable source of them.
* But how do you have good takes?
* I think the thing you do to form good takes, oversimplifying only slightly, is this: you read Learning by Writing and go "yes, that's how I should orient to the reading and writing that I do." Then you do that a bunch of times with your reading and writing on AI safety and governance, share your writing somewhere, have lots of conversations with people about it, change your mind, and learn more. That's how you have good takes.
* What to read?
* Start with the basics (e.g. BlueDot's courses, other reading lists), then work from there on what's interesting x important
* Write in public
* Usually, if you haven't got evidence of your takes being excellent, it's not that useful to just generally voice your takes. I think having takes and backing them up with some evidence, or saying things like "I read this thing, here's my summary, here's what I think" is useful. But it's kind of hard to get readers to care if you're just like "I'm some guy, here are my takes."
* Some especially useful kinds of writing
* In order to get people to care about your takes, you could do useful kinds of writing first, like:
* Explaining important concepts
* E.g., evals awareness, non-LLM architectures (should I care? why?), AI control, best arguments for/against sho