I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.
With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.
Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.
I also post to LessWrong sometimes.
If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".
(I've now responded via email.)
(I wrote this comment in a personal capacity, intending only to reflect my own views / knowledge.)
Hi,
In 2021, the EA Infrastructure Fund (which is not CEA, though both are supported and fiscally sponsored by Effective Ventures) made a grant for preparatory work toward potentially creating a COVID-related documentary.[1] I was the guest fund manager who recommended that grant. When I saw this post, I guessed the post was probably related to that grant and to things I said, and I’ve now confirmed that.
This post does not match my memory of what happened or what I intended to communicate, so I'll clarify a few things:
[1] The grant is described in one of EAIF’s public payout reports. But it doesn’t seem productive to name the grantees here.
(EDIT: I wrote this and hit publish before seeing that Rachel had also commented shortly beforehand. Her comment does not match my memory of events in a few ways beyond those I noted in this comment. I might say more on that later, but I'd guess it's not very productive to discuss this further here. Regardless, as noted in my comment, it does seem to me that in this case I failed to adequately emphasize that my input was intended just as input, and I regret that.)
Minor (yet long-winded!) comment: FWIW, I think that:
(I should flag that I didn't read the post very carefully, haven't read all the comments, and haven't formed a stable/confident view on this topic. Also I'm currently sleep-deprived and expect my reasoning isn't super clear unfortunately.)
(I also think the comment is overconfident in substance, but that happens often in productive debates, and I think that cost is worth paying and hard to avoid entirely if we want such debates to happen.)
(Update: I've now made this entry.)
Publication norms
I haven't checked how many relevant posts there are, but I'd guess 2-10 quite relevant and somewhat notable posts?
Related entries
proliferation | AI governance | AI forecasting | [probably some other things]
Also the Forecasting Research Institute
The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.
[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.
[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.
Also School of Thinking
School of Thinking (SoT) is a media startup.
Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.
Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.
Sometime after writing this, I saw that Asya Bergal had written an overlapping list of downsides here:
"I do think projects interacting with policymakers have substantial room for downside, including:
- Pushing policies that are harmful
- Making key issues partisan
- Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
- “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project"
I wrote this quickly, as part of a set of quick notes I wanted to share with a few Cambridge Existential Risk Initiative fellows. It mostly aggregates ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.
"Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-scale communications.
Feel free to let me know if you're not sure what I mean by any of these, or if you think it would be worthwhile for us to chat more about them.
Also bear in mind the unilateralist's curse.
None of this means people shouldn't do policy work or large-scale communications. Some policy work should definitely be happening already, and more should happen over time. These are just things to be aware of so you can avoid doing harmful things and can tweak net-positive things to be even more net positive by patching the downsides.
See also Hard-to-reverse decisions destroy option value and Adding important nuances to "preserve option value" arguments"
EU AI Act and/or NIST AI Risk Management Framework
These are quite separate, but I mention them together because they're both specific pieces of upcoming AI policy that many experts seem to think are pretty important. It's unclear to me whether we should have entries for these two specific things, or for things like this in general.
Also Cavendish Labs: