All of Sean_o_h's Comments + Replies

Huge congratulations, you have made the world better. Thank you.

I quite liked it, but I'd happily give up praise posts if it meant not having the denouncement posts. 

Nathan Young, 14d
Though sometimes denouncement posts are net positive, right? Like probably not the Nonlinear one, but I guess more denouncement of SBF beforehand would have been good.

Supervolcanoes being unlikely to be a human extinction risk was also my conclusion when I looked into it for an extinction risk review (currently under peer review) late last year, from speaking to volcanologists - McGraw (2024) was not released at that point, so I'm grateful for this analysis and for being pointed to the paper.

Datapoint: I put money in my pension.

I know this is a tangent, but I think at least in the US putting money in tax-advantaged retirement accounts still usually makes sense. I'll take the Roth 401k case, since it's the easiest to argue for:

  • In worlds that somehow end up as a vague continuation of the status quo, you'll want to have money at retirement.

  • The money is less locked up than it sounds:

    • If you want to withdraw just the contributions (which are untaxed), you can roll a Roth 401k over to a Roth IRA, if your employer allows this.

    • Five years from when you open your account there are

... (read more)

I agree. I suspect that responses to calls for evidence over the years played a big role in introducing and normalising xrisk research ideas in the UK context, before the big moves we've seen in the last year.

e.g. a few representative examples:

(2016) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/32690.pdf

(2017) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligen... (read more)

Not a perfect translation, but I like that proto-EA and leading Irish-language poet Sean O Riordain wrote a poem about moral circle expansion back in 1971. (It reads a lot better in the original language.)

https://comhar.ie/iris/81/5/ni-ceadmhach-neamhshuim/

Apathy Is Out


There’s not a fly, moth, bee,
man, or woman created by God
whose welfare’s not our responsibility;
to ignore their predicament
isn’t on.

There’s not a madman in Mad Valley
we shouldn’t sit with
and keep company,
since
he’s sick in the head
on our behalf.

There’s not a place, stream or bush, however remote... (read more)

> Cotton-Barratt could have been thrown out without any possibility of discussion. I am reliably told this is the policy of some UK universities.

Depending on what 'discussion' means here, I'd be surprised. It would be illegal to fire someone without due process. Whether discussion would be public, as it is here, is a different matter; there tends to be a push towards confidentiality.

For balance: I've been an advocate for victims in several similar cases in UK universities, at least one of which was considerably more severe than what I've seen described in... (read more)

Nathan Young, 2mo
So I asked my friend who runs training at universities on this topic, and they said that at one university it appeared that way for a while, which is moderately weaker than what I said. So I got that wrong. But it still works as an example: there was a real-world place where things were worse than here.

Uh, the word in that screenshot is "meditating". She was asking people to not talk too loudly while she was meditating.

ElliotJDavies, 3mo
Oh thanks for flagging, I will retract it now 

That is correct. 

I would strongly caution against doing so. Even if it turns out to be justified in this instance (and I offer no view either way on whether it is or not), I cannot think of a more effective way of discouraging victims/whistleblowers from coming forward (in other cases in this community) in future situations.

I think norms should strongly push against taking seriously any public accusation made anonymously in most circumstances. I feel like we have taken a norm that was appropriate to a very limited set of circumstances and tried to make a grand moral principle out of it, and it doesn't work. Giving some anonymity to victims of sexual assault/harassment, in some circumstances, makes sense because it's a uniquely embarrassing thing to be a victim of due to our cultural taboos around sex.  Anonymity might be appropriate for people revealing problems at their... (read more)

Xing Shi Cai, 4mo
How much credibility does he still have left after backtracking?

This is both a very kind and a very helpful thing to offer. This is something that can help people an awful lot in terms of their career. 

Good to know, thank you.

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

Agree with this and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece's discussion (which involved EA a decent amount in a way I thought was quite fair -- https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the NYer audience is going to be more interested in this sort of nuance than most).

The other factors I think are: 1. to what extent there are vivid new tidbits or revelations in Lew... (read more)

The Panorama episode briefly mentioned EA. Peter Singer spoke for a couple of minutes, and EA was mainly viewed as a charity that would be missing out on money. There seemed to be a lot more interest in the internal discussions within FTX, the crypto drama, the politicians, celebrities, etc.

Maybe Panorama is an outlier, but potentially EA is not that interesting to most people, or seems too complicated to explain if you only have an hour.

quinn, 6mo
Michael Lewis wouldn't do it as a gotcha/sneer, but this is a reason I'll be upset if Adam McKay ends up with the movie. 

Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.

Sean_o_h, 6mo

Stated more eloquently than I could have, SYA.

I'd also add that, were I to be offering advice to K & E, I'd probably advise taking more time. Reacting aggressively or defensively is all too human when facing the hurricane of a community's public opinion - and that is probably not in anyone's best interest. Taking the time to sit with the issues, and later respond more reflectively as you describe, seems advisable.

Balanced against that, whatever you think about the events described, this is likely to have been a very difficult experience to go through in such a public way from their perspective - one of them described it in this thread as "the worst thing to ever happen to me". That may have affected their ability to respond promptly.

[anonymous], 6mo

Just want to signal my agreement with this.

My personal guess is that Kat and Emerson acted in ways that were significantly bad for the wellbeing of others. My guess is also that they did so in a manner that calls for them to take responsibility: to apologise, reflect on their behaviour, and work on changing both their environment and their approach to others to ensure this doesn't happen again. I'd guess that they have committed a genuine wrongdoing.

I also think that Kat and Emerson are humans, and this must have been a deeply distressing experience for th... (read more)

+1; except that I would say we should expect to see more, and more high-profile.

AI xrisk is now moving from "weird idea that some academics and oddballs buy into" to "topic which is influencing and motivating significant policy interventions", including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).

The former, for a lot of people (e.g. folks in AI/CS who didn't 'buy' xrisk), was a minor annoyance. The latter is ... (read more)

Daniel_Eth, 6mo
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)

Sure, I agree with that. I also have parallel conversations with AI ethics colleagues - you're never going to make much headway convincing a few of the most hardcore safety people that your justice/bias etc. work is anything but a trivial waste of time; in their view, anyone sane is working on averting the coming doom.

Don't need to convince everyone; and there will always be some background of articles like this. But it'll be a lot better if there's a core of cooperative work too, on the things that benefit from cooperation. 

My favourite recent example of (2) is... (read more)

Some are hostile but not all, and there are disagreements and divisions in AI ethics just as deep as, if not deeper than, those in EA or any other broad community with multiple important aims that you can think of.

External oversight of the power of big tech is a goal worth helping to accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act

Remmelt, 6mo
I expect many communities would agree on working to restrict Big Tech's use of AI to consolidate power.  List of quotes from different communities here.
Chris Leong, 6mo
I know you're probably extremely busy, but if you'd like to see more collaboration between the x-risk community and AI ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post. I'm significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
JWS, 6mo

epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance

I really wish I had your positive view on this Sean, but I really don't think there's much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.

Gebru is, imo, never going to view EA positively. And she'll use her influence as strongly as possible in the 'AI Ethics' community. 

Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very d... (read more)

I totally buy "there are lots of good sensible AI ethics people with good ideas, we should co-operate with them". I don't actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It's only the idea that "be co-operative" will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I'm a bit skeptical of. My claim is not "AI ethics bad", but "you are unlikely to be able to persuade the most AI hostile figures within AI ethics".

I've heard versions of the claim multiple times, including from people I'd expect to know better, so having the survey data to back it up might be helpful even if we're confident we know the answer.

James Herbert, 6mo
Good point! You’re right
harfe, 6mo
I feel a bit uneasy that EAs should put a lot of effort into a survey (both the survey designers and takers) just because someone made something up at some point. Maybe ask the people you'd expect to know better why they believe what they believe?
[anonymous], 6mo

I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions. 

Where I think most EAs would strongly disagree is that they would find pursuing SAI "at all costs" abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EA's professed beliefs wouldn't be entirely convincing to some people given the close connections bet... (read more)

>"Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo.

Could we get a survey on a few versions of this question? I think it's actually super-rare in EA. 

e.g. 

"i believe super-intelligent AI should be pursued at all costs"

"I believe the benefits outweigh the risks of pursuing superintelligent AI"

"I believe if risk of doom can be agreed to be <0.2, then the benefits of AI outweight the risks"

"I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable"

David_Moss, 6mo
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
James Herbert, 6mo
Yeah it's incredibly inaccurate, I don't think it even needs to be surveyed. 

Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It's pretty brief, but they're looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out!
https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk

Less directly relevant: Harry Law also has a new newsletter in the Jack Clark style, but more focused on governance/history/lessons for AI:
https://learningfromexamples.substack.com/p/the-week-in-examples-3-2-september

ChanaMessinger, 7mo
Thanks!

John's comment points to another interesting tension. 

CSER was indeed intended to be pluralistic and to provide space for heterodox approaches. And the general 'vibe' John gestures towards (I take it he's not intending to be fully literal here - please correct me if I'm misinterpreting, John) is certainly more present at CSER than at other Xrisk orgs. It is also a vibe that is regularly rejected as a majority position in internal CSER full-group discussions. However, some groups and lineages are much more activist and evangelical in their approach tha... (read more)

In response to your first point, I think one of the hopes of creating a pluralistic xrisk community is that different parts of the community actually understand the work and perspectives of the others, rather than either caricaturing/misrepresenting them (for example, I've heard people outside EA assume all EA xrisk work is basically just what Bostrom says) or simply not knowing what the others have to say. Ultimately, I think the workshop that this statement came out of did this really well, and so I hope if there is desire to move towards a more plu... (read more)

Sharing a relevant blog post today by Harry Law on the limits to growth and predictions of doom, and lessons for AI governance, which cites this post.

Apologies that I still owe some replies to the discussion below, I've found it all really helpful (thank you!). I agree with those who say that it would be useful to have some deeper historical analysis of the impact of past 'doomer' predictions on credibility, which is clearly informative to the question of the weight we should assign to the 'cry wolf' concern.

https://www.harrylaw.co.uk/post/ai-governance-and-the-limits-to-growth

I think that (hinges on timelines) is right. Other than the first, I think most of my suggestions come at minimal cost to the short-timelines world, and will help with minimising friction/reputational hit in the long-timelines world. Re: the first, not delivering the strongest (and least hedged) version of the argument may weaken the message for the short-timelines world. But I note that even within this community, there is wide uncertainty and disagreement re: timelines; very short timelines are far from consensus.

Thanks! Re:

1. I think this is plausible (though I'm unclear on whether you mean 'we as the AI risk research community' or 'we as humanity' here)

2. This bias definitely exists, but AI in the last year has cut through to broader society in a huge way (I keep overhearing conversations on ChatGPT and other things in cafes, on trains, etc., admittedly in the Cambridge/London area; suddenly random family members have takes, etc. It's showing up in my wife's social media, and being written about by the political journalists she follows, where it never had before, etc.). Ditto (although to a smaller extent) AI xrisk. EA/FTX didn't cut through to anything like the same extent.

Yes, I think this is plausible-to-likely, and is a strong counter-argument to the concern I raise here.

Hmm, fwiw, I spontaneously think something like this is overwhelmingly likely. 

Even in the (imo unlikely) case of AI research basically stagnating from now on, I expect AI applications to have effects that will significantly affect the broader public and not make them think anything close to "what a nothingburger" (e.g. like I've heard it happen for nanotechnology). E.g. I'm thinking of things like the broad availability of personal assistants & AI companions, the automation of increasingly many tasks, impacts on education, on the productivity of soft... (read more)

These will still be massive, and massively expensive, training runs though - big operations that will constitute very big strategic decisions only available to the best-resourced actors. 

Greg_Colbourn, 1y
In the post-AutoGPT world, this seems like it will no longer be the case. There is enough fervour among AGI accelerationists that the required resources could be quickly amassed by crowdfunding (cf. crypto projects raising similar amounts to those needed).
Greg_Colbourn, 1y
Yes, but they will become increasingly cheap. A taboo is far stronger than regulation.

This is great! Also, I very much hope that the series on skill-building happens.

I'm not taking a position on the question of whether Nick should stay on as Director, and as noted in the post I'm on record as having been unhappy with his apology (which remains my position)*,  but for balance and completeness I'd like to provide a perspective on the importance of Nick's leadership, at least in the past.

I worked closely with Nick at FHI from 2011 to 2015. While I've not been at FHI much in recent years (due to busyness elsewhere) I remember the FHI of that time being a truly unique-in-academia place; devoted to letting and helping b... (read more)

ThomasW, 1y
Thanks for sharing your perspective, it's useful to hear!

Reasons I would disagree:
(1) Bing is not going to make us 'not alive' on a coming-year time scale. It's (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it's not a direct global threat.
(2) The people best-placed to deal with EA 'scandal' issues are unlikely to perfectly overlap with the people best-placed to deal with the opportunities/challenges Bing poses.
(3) I think it's bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it's a norm that can easily become self-serving.

Evan_Gaensbauer, 1y
Strongly upvoted

Thanks for putting this together, very helpful given the growth of activities in the UK!

Strong agree. I've been part of other communities/projects that withered away in this way.

Do you have examples/links?

Rees has also written multiple blurbs for Will MacAskill, Nick Bostrom et al.

Great to see such a detailed, focused, and well-researched analysis of this topic, thank you. I haven't yet read beyond the executive summary, other than a skim of the longer report, but I'm looking forward to doing so.

A clarification: CSER gets some EA funds (a combination of SFF, SoGive, BERI in-kind support, and individual LTFF projects), but likely 1/3 or less of its budget at any given time. The overall point (all of these are a small fraction of overall EA funds) is not affected.

Davidmanheim, 2y
I'll just note that lots of what CSER does is much more policy relevant and less philosophical compared to the other orgs mentioned, and it's harder to show impact for more practical policy work than it is to claim impact for conceptual work. That seems to be part of the reason EA funding orgs haven't been funding as much of their budget. 

7.4% actually seems quite high to me (for a university without a long-established intellectual hub, etc.); I would have predicted lower in advance.

TylerMaule, 2y
EA does seem a bit overrepresented (sort of acknowledged here). Possible reasons: (a) sharing was encouraged post-survey, with some forewarning; (b) EAs might be more likely than average to respond to a 'Student Values Survey'?

An early output from this project: Research Agenda (pre-review)

Lessons from COVID-19 for GCR governance: a research agenda

The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also a... (read more)

At least these ones involve very different cause areas, so should be obvious from context (as contrasted with two organisations that work on long-term risk where AI risk is a focus).

Also, have some pity for the Partnership on AI and the Global Partnership on AI. 

[disclaimer: acting director of CSER, but writing in personal capacity]. I'd also like to add my strongest endorsement of Carrick - as ASB says, a rare and remarkable combination of intellectual brilliance, drive, and tremendous compassion. It was a privilege to work with him at Oxford for a few years. It would be wonderful to see more people like Carrick succeeding in politics; I believe it would make for a better world.

Oregon Guy, 2y
What are the issues in Oregon that you believe Carrick would be best suited to address?

Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don't know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.

I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.

I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism. 

The Torres critiques are getting attention in non-longtermist contexts, especially with people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places in which the critiques do not represent the source material entirely fairly.

Thanks for this article. Just to add another project in this space: CSER's Haydn Belfield and collaborator Shin-Shin Hua are working on a series of papers relating to corporate governance of AI, looking at topics including how to resolve tensions between competition law and cooperation on e.g. AI safety. This work is motivated by reasoning similar to that captured in this post.

The first output (in the Yale Journal of Law and Technology) is here:
https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development

SethBaum, 2y
Thanks for sharing this - looks like good work.

We have given policy advice to and provided connections and support to various people and groups in the policy space. This includes UK civil servants, CSER staff, the Centre for Long-Term Resilience (CLTR), and the UN.

I'd like to confirm that the APPGFG's advice/connections/support has been very helpful to various of us at CSER. I also think that the APPG has done really good work this year - to Sam, Caroline and Natasha's great credit. Moreover, I think there is a lot to be learned from the very successful and effective policy engagement network that has ... (read more)

For those interested in the 'epistemic security' topic, the most relevant report is here; it's an area we (provisionally) plan to do more on.
https://www.repository.cam.ac.uk/handle/1810/317073

Or a brief overview by the lead author is here:
https://www.bbc.com/future/article/20210209-the-greatest-security-threat-of-the-post-truth-age

Re: Ireland, I don't know much about this later shortage, but an alternative explanation would be lower population density / demand on food/agrarian resources. Not only did something like 1 million people die during the Great Famine, but >1 million emigrated; the total population dropped substantially.

Ramiro, 2y
That's true. It also occurred to me after I posted it here. The Irish population declined steadily after the 1840s (6.5 million), long into the 1960s (2.8 million).

Thanks Linch. I'd had 
P1: People in X are racist

in mind in terms of "serious claim, not to be made lightly", but I acknowledge your well-made points re: burden of proof on the latter.

I also worry about distribution of claims in terms of signal v noise. I think there's a lot of racism in modern society, much of it glaring and harmful, but difficult to address (or sometimes out of the overton window to even speak about). I don't think matters are helped by critiques that go to lengths to read racism into innocuous texts, as the author of one of the critiques above has done in my view (in other materials, and on social media).

Linch, 2y
I agree that reading racism or white supremacy into innocuous texts is harmful, and for the specific instances I'm aware of, it both involved selective quote mining, and also the mined quote wasn't very damning even out of context.

Thanks Halstead. I'll try to respond later, but I'd quickly like to be clear re: my own position that I don't perceive longtermism as racist, and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).

> and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).

Do you mean to say that 

P1: People in X are racist

vs

P2: People in X are not racist

are serious claims that are not to be made lightly? 

(Non-sequitur below, may not be interesting)

For what it's worth, my best guess is that having the burden of proof on P1 is the correct decision procedure in the society we live in, as these accusations have a lot of associated baggage and we don't currently have a socially acceptable way to say naively reaso... (read more)

I agree the racism critique is overstated, but I think there's a more nuanced argument for a need for greater representation/inclusion for xrisk reduction to be very good for everyone.

Quick toy examples (hypothetical):
- If we avoid extinction by very rich, nearly all white people building enough sustainable bunkers, the human species continues/rebuilds, but this is not good for non-white people.
- If we do enough to avoid the xrisk scenarios (say, getting stuck at the poles with minimal access to resources needed to progress civilisation or something) in cl... (read more)

[anonymous], 2y

It seems odd to me to criticise a movement as racist without at least acknowledging that the thing we are working on seems more beneficial for non-white people than the things many other philanthropists work on. The examples you give are hypothetical, so they aren't a criticism of what longtermists do in the real world. Most longtermists are focused on AI, bio and to a lesser extent climate risk. I fail to see how any of that work has the disparate demographic impact described in the hypotheticals. 
