Sean_o_h


Democratising Risk - or how EA deals with critics

Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don't know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.

I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.

Response to Recent Criticisms of Longtermism

I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism. 

The Torres critiques are getting attention in non-longtermist contexts, especially among people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places in which the critiques do not represent the source material entirely fairly.

The case for long-term corporate governance of AI

Thanks for this article. Just to add another project in this space: CSER's Haydn Belfield and collaborator Shin-Shin Hua are working on a series of papers relating to corporate governance of AI, looking at topics including how to resolve tensions between competition law and cooperation on e.g. AI safety. This work is motivated by similar reasoning as captured in this post. 

The first output (in the Yale Journal of Law and Technology) is here:
https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development

APPG for Future Generations Impact Report 2020 - 2021

We have given policy advice to and provided connections and support to various people and groups in the policy space. This includes UK civil servants, CSER staff, the Centre for Long-Term Resilience (CLTR), and the UN.

I'd like to confirm that the APPGFG's advice/connections/support has been very helpful to various of us at CSER. I also think that the APPG has done really good work this year - to Sam, Caroline and Natasha's great credit. Moreover, I think there is a lot to be learned from the very successful and effective policy engagement network that has grown up in the UK in recent years, which includes the APPGFG, the Centre for Long-Term Resilience, and (often with the support and guidance of the former two) input from various of the academic orgs. I think all this is likely to have played a significant role in the UK government's present level of active engagement with issues around GCR/Xrisk and long-term issues.

Prioritization Research for Advancing Wisdom and Intelligence

For those interested in the 'epistemic security' topic, the most relevant report is here; it's an area we (provisionally) plan to do more on.
https://www.repository.cam.ac.uk/handle/1810/317073

Or a brief overview by the lead author is here:
https://www.bbc.com/future/article/20210209-the-greatest-security-threat-of-the-post-truth-age

On famines, food technologies and global shocks

Re: Ireland, I don't know much about this later shortage, but an alternative explanation would be lower population density / demand on food/agrarian resources. Not only did something like 1 million people die during the Great Famine, but >1 million emigrated; the total population dropped by a large amount.

Noticing the skulls, longtermism edition

Thanks Linch. I'd had 
P1: People in X are racist

in mind in terms of "serious claim, not to be made lightly", but I acknowledge your well-made points re: burden of proof on the latter.

I also worry about distribution of claims in terms of signal v noise. I think there's a lot of racism in modern society, much of it glaring and harmful, but difficult to address (or sometimes out of the overton window to even speak about). I don't think matters are helped by critiques that go to lengths to read racism into innocuous texts, as the author of one of the critiques above has done in my view (in other materials, and on social media).

Noticing the skulls, longtermism edition

Thanks Halstead. I'll try to respond later, but I'd quickly like to be clear re: my own position that I don't perceive longtermism as racist, and am not claiming people within it are racist (I consider this a serious claim not to be made lightly).

Noticing the skulls, longtermism edition

I agree the racism critique is overstated, but I think there's a more nuanced argument for a need for greater representation/inclusion for xrisk reduction to be very good for everyone.

Quick toy examples (hypothetical):
- If we avoid extinction by very rich, nearly all-white people building enough sustainable bunkers, the human species continues/rebuilds, but the outcome is not good for non-white people.
- If we do enough on climate change to avoid the xrisk scenarios (say, civilisation getting stuck at the poles with minimal access to the resources needed to progress), but not enough to avoid massively disadvantaging most of the global south, we badly exacerbate inequality (maybe better than extinction, but not what we might consider a good outcome).

And so forth. So the more nuanced argument might be we (a) need to avoid extinction, but (b) want to do so in such a way that we don't exacerbate inequality and other harms. We stand a better chance of doing the latter by including a wider array of stakeholders than are currently in the conversation.
