Sean_o_h

Research Programme Director @ University of Cambridge
2777 karma · Joined Dec 2014

Bio

I direct the AI: Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and AI strategy/policy with the Centre for the Future of Intelligence.

Comments (182)

Good to know, thank you.

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be a substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.

Stated more eloquently than I could have, SYA.

I'd also add that, were I advising K & E, I'd probably suggest taking more time. Reacting aggressively or defensively is all too human when facing the hurricane of a community's public opinion - and that is probably not in anyone's best interest. Taking the time to sit with the issues, and responding more reflectively later as you describe, seems advisable.

Balanced against that, whatever you think about the events described, this is likely to have been a very difficult experience to go through in such a public way from their perspective - one of them described it in this thread as "the worst thing to ever happen to me". That may have affected their ability to respond promptly.

+1; except that I would say we should expect to see more of this, and more high-profile examples.

AI xrisk is now moving from "weird idea that some academics and oddballs buy into" to "topic which is influencing and motivating significant policy interventions", including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).

The former, for a lot of people (e.g. folks in AI/CS who didn't 'buy' xrisk), was a minor annoyance. The latter is something that will concern them - either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.

I would think it's reasonable to anticipate more of this.

Sure, I agree with that. I also have parallel conversations with AI ethics colleagues - you're never going to convince a few of the most hardcore safety people that your justice/bias etc. work is anything but a trivial waste of time; in their view, anyone sane is working on averting the coming doom.

We don't need to convince everyone, and there will always be some background of articles like this. But it'll be a lot better if there's a core of cooperative work too, on the things that benefit from cooperation.

My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf

Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g. 

https://dl.acm.org/doi/10.1145/3278721.3278780

Another would be Haydn Belfield's new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/

Jess Whittlestone's online engagements with Seth Lazar have been pretty productive, I thought.

Some are hostile, but not all, and there are disagreements and divisions in AI ethics just as deep as, if not deeper than, those in EA or any other broad community with multiple important aims that you can think of.

External oversight of the power of big tech is a goal worth helping to accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act

I've heard versions of the claim multiple times, including from people I'd expect to know better, so having the survey data to back it up might be helpful even if we're confident we know the answer.

>"Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo.

Could we get a survey on a few versions of this question? I think it's actually super-rare in EA. 

e.g. 

"i believe super-intelligent AI should be pursued at all costs"

"I believe the benefits outweigh the risks of pursuing superintelligent AI"

"I believe if risk of doom can be agreed to be <0.2, then the benefits of AI outweight the risks"

"I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable"
