I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.
With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.
Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.
I also post to LessWrong sometimes.
If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".
Riesgos Catastróficos Globales
Our mission is to conduct research on and prioritize global catastrophic risks in the Spanish-speaking countries of the world.
There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos Catastróficos Globales team]
We’re a team of researchers investigating and forecasting the development of advanced AI.
International Center for Future Generations
The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.
As of today, their website lists their priorities as:
I appreciate you sharing this additional info and reflections, Julia.
I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have warranted someone outside CEA being looped in to decide what to do about this incident. At first glance, I feel confused about this not having been mentioned in these comments. I'd be curious to hear whether you explicitly thought about that when you were thinking about this incident in 2021?
That is, if I understand correctly, in some sense Owen had a key position of authority in an organization that in turn technically had authority over the organization you worked at. That said, my rough impression from the outside is that, prior to November 2022, the umbrella organization in practice exerted little influence over what the organizations it hosted did. So this conflict of interest was probably weaker in practice than it would've looked on paper. But it still seems noteworthy.
More generally, this makes me realise that it seems like it would be valuable for the community health team to:
(I'm not saying this should extend to the other orgs EV UK / EV US host, e.g. GWWC or 80k, just CEA and the umbrella orgs themselves.)
I'd be curious to hear whether such a thing is already in place, and if so what it looks like.
Caveats in a footnote. [2]
(I wrote this just in a personal capacity. I didn't run this by anyone.)
I'm not sure if this terminology is exactly right. I'm drawing on the post CEA Disambiguation.
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.
Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as reducing international tensions and becoming a world leader in green technology.
Policy Foundry
an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]
The Collective Intelligence Project
We are an incubator for new governance models for transformative technology.
Our goal: To overcome the transformative technology trilemma.
Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.
Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.
Collective flourishing requires all three. We need CI R&D so we can simultaneously advance technological capabilities, prevent disproportionate risks, and enable individual and collective self-determination.
Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so the post Will and I wrote doesn't draw on or reference it, but it's of course relevant too.
...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!
The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.
People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you're interested, please don't rule yourself out because you think you're not qualified, at least not before reading the job ad!