Sean_o_h


Comments

Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

7.4% actually seems quite high to me (for a university without a long-established intellectual hub, etc.); I would have predicted lower in advance.

The Future Fund’s Project Ideas Competition

An early output from this project: Research Agenda (pre-review)

Lessons from COVID-19 for GCR governance: a research agenda

The Lessons from COVID-19 Research Agenda offers a structure for studying the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aim of our study: to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with a view to improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management, such as extreme climate change, radical loss of biodiversity, and the governance of extreme risks posed by new technologies.

Our study aims to identify key moments ('inflection points') that significantly shaped the catastrophic trajectory of COVID-19. To that end, this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines, and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, at both the national and the global level, using counterfactual analysis. Candidate inflection points within each cluster are assessed along four aspects: (1) the information available at the time; (2) the decision-making processes used; (3) the capacity and ability to implement different courses of action; and (4) the communication of information and decisions to different publics. The Research Agenda identifies crucial questions in each cluster, across all four aspects, that should enable the key lessons from COVID-19 and the pandemic response to be identified.

Neil Buddy Shah has been appointed CEO of the Clinton Health Access Initiative

At least these ones involve very different cause areas, so the intended organisation should be obvious from context (in contrast with two organisations that both work on long-term risk with a focus on AI risk).

Also, have some pity for the Partnership on AI and the Global Partnership on AI. 

The best $5,800 I’ve ever donated (to pandemic prevention).

[Disclaimer: acting director of CSER, but writing in a personal capacity.] I'd also like to add my strongest endorsement of Carrick: as ASB says, a rare and remarkable combination of intellectual brilliance, drive, and tremendous compassion. It was a privilege to work with him at Oxford for a few years. It would be wonderful to see more people like Carrick succeeding in politics; I believe it would make for a better world.

Democratising Risk - or how EA deals with critics

Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don't know who Rubi is. I ask that the moderators check IP addresses and reach out to me for any information that can help confirm this.

I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.

Response to Recent Criticisms of Longtermism

I note the rider says it's not directed at regular forum users or others already familiar with longtermism.

The Torres critiques are getting attention in non-longtermist contexts, especially among people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing the critiques with academic colleagues who have come across them; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places in which the critiques do not represent the source material entirely fairly.

The case for long-term corporate governance of AI

Thanks for this article. Just to add another project in this space: CSER's Haydn Belfield and his collaborator Shin-Shin Hua are working on a series of papers on the corporate governance of AI, looking at topics including how to resolve tensions between competition law and cooperation on, e.g., AI safety. This work is motivated by reasoning similar to that captured in this post.

The first output (in the Yale Journal of Law and Technology) is here:
https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development

APPG for Future Generations Impact Report 2020 - 2021

We have given policy advice to and provided connections and support to various people and groups in the policy space. This includes UK civil servants, CSER staff, the Centre for Long-Term Resilience (CLTR), and the UN.

I'd like to confirm that the APPGFG's advice, connections, and support have been very helpful to several of us at CSER. I also think that the APPG has done really good work this year, to Sam, Caroline, and Natasha's great credit. Moreover, I think there is a lot to be learned from the very successful and effective policy engagement network that has grown up in the UK in recent years, which includes the APPGFG, the Centre for Long-Term Resilience, and (often with the support and guidance of the former two) input from various of the academic organisations. I think all this is likely to have played a significant role in the UK government's present level of active engagement with issues around GCR/x-risk and long-term issues.

Prioritization Research for Advancing Wisdom and Intelligence

For those interested in the 'epistemic security' topic, the most relevant report is here; it's an area we (provisionally) plan to do more on.
https://www.repository.cam.ac.uk/handle/1810/317073

Alternatively, a brief overview by the lead author is here:
https://www.bbc.com/future/article/20210209-the-greatest-security-threat-of-the-post-truth-age
