JulianHazell

Pursuing a graduate degree (e.g. Master's)
Working (0-5 years experience)
1439 · Joined Dec 2020

Bio

Academically/professionally interested in AI governance (research, policy, communications, and strategy), technology policy, longtermism, healthy doses of moral philosophy, the social sciences, and blog writing.

Hater of factory farms, enjoyer of effective charities.

julian[dot]hazell[at]mansfield.ox.ac.uk

How others can help me

Reach out to me if you want to work with me or collaborate in any way.

How I can help others

Reach out to me if you have questions about anything. I'll do my best to answer, and I promise I'll be friendly!

Comments (46)

Thanks for taking the time to write up your views on this. I'd be keen on reading more posts like this from other folks with backgrounds in ML — particularly those who aren't already in the EA/LessWrong/AIS sphere.

I'm sorry to hear that you're stressed and anxious about AI. You're certainly not alone here, and what you're feeling is absolutely valid.

More generally, I'd suggest checking out resources from the Mental Health Navigator service. Some of them might be helpful for coping with these feelings.

More specifically, maybe I can offer a take on these events that's potentially worth considering. One off-the-cuff reaction I've had to Bing's weird, aggressive replies is that they might be good for raising awareness and making concerns about AI risk much more salient. I'm far more scared about worlds where systems' bad behaviour is hidden until things get really bad, such that the world is lulled into complacency up until that point. Having a very prominent system exhibit odd behaviour could be helpful for galvanising action.

I'm appreciative of Shakeel Hashim. Comms roles seem hard in general. Comms roles for EA seem even harder than that. Comms roles for EA during the last 3 months sound unbelievably hard and stressful.

(Note: Shakeel is a personal friend of mine, but I don't think that has much influence on how appreciative I am of the work he's doing, along with everyone else managing these crises.)

Yeah, fair point. When I wrote this, I roughly followed this process:

  • Write article
  • Summarize overall takes in bullet points
  • Add some probabilities to show roughly how certain I am of those bullet points, where this process was something like “okay I’ll re-read this and see how confident I am that each bullet is true”

I think it would've been more informative if I'd written the bullet points with the explicit aim of attaching probabilities to them, rather than writing them first and only afterwards thinking "ah yeah, I should more clearly express my certainty with these".

I think I was just reading all of those claims together and trying to subjectively guess how likely I find them all to be. So to split them up, in order of each claim: 90%, 90%, 80%.

That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.


In this case — and many, actually — I think it's fair to assume they are. OP is pretty explicit about taking a hits-based giving approach.

I would like to know why. I found the post insightful.

Yeah, I'm also similarly sceptical that a highly publicised/discussed portion of one of the most hyped industries — one that borders on a buzzword at times — has not captured the attention or consideration of the market. Seems hard to imagine given the remarkably salient progress we've seen in 2022.

That phrasing is better, IMO. Thanks Michael.

I think the debate between HLI and GW is great. I've certainly learned a lot, and have slightly updated my views about where I should give. I agree that competition between charities (and charity evaluators) is something to strive for, and I hope HLI keeps challenging GiveWell in this regard.
