Academically/professionally interested in AI governance (research, policy, communications, and strategy), technology policy, longtermism, healthy doses of moral philosophy, the social sciences, and blog writing.
Hater of factory farms, enjoyer of effective charities.
Reach out to me if you want to work with me or collaborate in any way.
Reach out to me if you have questions about anything. I'll do my best to answer, and I promise I'll be friendly!
+1, I would like things like that too. I agree that having much of the great object-level work in the field route through forums (alongside a lot of other material that is not so great) is probably not optimal.
I will say though that going into this, I was not particularly impressed with the suite of beginner articles out there — sans some of Kelsey Piper's writing — and so I doubt we're anywhere close to approaching the net-negative territory for the marginal intro piece.
One approach to this might be a soft norm of trying to arXiv-ify things that would be publishable on arXiv without much additional effort.
I'm sorry to hear that you're stressed and anxious about AI. You're certainly not alone here, and what you're feeling is absolutely valid.
More generally, I'd suggest checking out resources from the Mental Health Navigator service. Some of them might be helpful for coping with these feelings.
More specifically, maybe I can offer a take on these events that's potentially worth considering. One off-the-cuff reaction I've had to Bing's weird, aggressive replies is that they might be good for raising awareness and making concerns about AI risk much more salient. I'm far more scared of worlds where systems' bad behaviour stays hidden until things get really bad, such that the world is lulled into a false sense of complacency up until that point. Having a very prominent system exhibit odd behaviour could be helpful for galvanising action.
I’m appreciative of Shakeel Hashim. Comms roles seem hard in general. Comms roles for EA seem even harder. Comms roles for EA during the last three months sound unbelievably hard and stressful.
(Note: Shakeel is a personal friend of mine, but I don’t think that has much influence on how appreciative I am of the work he — and everyone else managing these crises — is doing.)
Yeah, fair point. When I wrote this, I roughly followed this process:
I think it would’ve been more informative if I had written the bullet points with an explicit aim of attaching probabilities to them, rather than writing them first and only thinking afterwards, “ah yeah, I should more clearly express my certainty with these”.
Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.