JulianHazell

1632 karma · Joined Dec 2020 · Pursuing a graduate degree (e.g. Master's) · Working (0-5 years)

Bio

Academically/professionally interested in AI governance (research, policy, communications, and strategy), technology policy, longtermism, healthy doses of moral philosophy, the social sciences, and blog writing.

Hater of factory farms, enjoyer of effective charities.

julian[dot]hazell[at]mansfield.ox.ac.uk

How others can help me

Reach out to me if you want to work with me or collaborate in any way.

How I can help others

Reach out to me if you have questions about anything. I'll do my best to answer, and I promise I'll be friendly!

Comments (50)

Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

This is great, thanks for the change. As someone who aspires to use evidence and careful reasoning to determine how to best use my altruistic resources, I sometimes get uncomfortable when people call me an effective altruist.

+1, I would like things like that too. I agree that having much of the great object-level work in the field route through forums (alongside a lot of other material that is not so great) is probably not optimal.

I will say though that, going into this, I was not particularly impressed with the suite of beginner articles out there — sans some of Kelsey Piper's writing — so I doubt we're anywhere close to net-negative territory for the marginal intro piece.

One approach to this might be a soft norm of trying to arXiv-ify things that would be publishable on arXiv without much additional effort.

Very cool! I’m excited to see where this project goes.

Thanks for taking the time to write up your views on this. I'd be keen on reading more posts like this from other folks with backgrounds in ML — particularly those who aren't already in the EA/LessWrong/AIS sphere.

Answer by JulianHazell · Feb 15, 2023

I'm sorry to hear that you're stressed and anxious about AI. You're certainly not alone here, and what you're feeling is absolutely valid.

More generally, I'd suggest checking out resources from the Mental Health Navigator service. Some of them might be helpful for coping with these feelings.

More specifically, maybe I can offer a take on these events that's potentially worth considering. One off-the-cuff reaction I've had to Bing's weird, aggressive replies is that they might be good for raising awareness and making concerns about AI risk much more salient. I'm far more scared about worlds where systems' bad behaviour stays hidden until things get really bad, such that the world is lulled into complacency up until that point. Having a very prominent system exhibit odd behaviour could be helpful for galvanising action.

I’m appreciative of Shakeel Hashim. Comms roles seem hard in general. Comms roles for EA seem even harder. Comms roles for EA during the last 3 months sound unbelievably hard and stressful.

(Note: Shakeel is a personal friend of mine, but I don’t think that has much influence on how appreciative I am of the work he, and everyone else managing these crises, is doing.)

Yeah, fair point. When I wrote this, I roughly followed this process:

  • Write article
  • Summarize overall takes in bullet points
  • Add some probabilities to show roughly how certain I am of each bullet point, where this process was something like "okay, I'll re-read this and see how confident I am that each bullet is true"

I think it would’ve been more informative if I had written the bullet points with the explicit aim of adding probabilities to them, rather than writing them and only thinking afterwards, "ah yeah, I should more clearly express my certainty with these".

I think I was just reading all of those claims together and subjectively guessing how likely I find them to be. So, to split them up, in order of each claim: 90%, 90%, 80%.
