I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.
With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.
Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.
I also post to LessWrong sometimes.
If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".
Thanks - I only read this linkpost and Haydn's comment quoting your summary, not the linked post as a whole, but this seems to me like probably useful work.
One nitpick:
It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.
I feel like it'd be more useful/clearer to say "It seems x% likely that the US will create transformative AI before China, and y% likely if TAI is developed in short(ish) timelines (next 5-15 years)". As worded, it's ambiguous whether the 70% attaches to the overall claim or specifically to the short(ish)-timelines case, and "much more likely" conveys less than two explicit probabilities would.
Thanks, this seems right to me.
Are the survey results shareable yet? Do you have a sense of when they will be?
Also Cavendish Labs:
Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.
We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.
(I've now responded via email.)
(I wrote this comment in a personal capacity, intending only to reflect my own views/knowledge.)
Hi,
In 2021, the EA Infrastructure Fund (which is not CEA, though both are supported and fiscally sponsored by Effective Ventures) made a grant for preparatory work toward potentially creating a COVID-related documentary.[1] I was the guest fund manager who recommended that grant. When I saw this post, I guessed the post was probably related to that grant and to things I said, and I’ve now confirmed that.
This post does not match my memory of what happened or what I intended to communicate, so I'll clarify a few things:
[1] The grant is described in one of EAIF’s public payout reports. But it doesn’t seem productive to name the grantees here.
(EDIT: I wrote this and hit publish before seeing that Rachel had also commented shortly beforehand. Her comment doesn't match my memory of events in a few ways beyond what I noted in this comment. I might say more on that later, but I'd guess it's not very productive to discuss this further here. Regardless, as noted in my comment, it does seem to me that in this case I failed to adequately emphasize that my input was intended just as input, and I regret that.)
Minor (yet long-winded!) comment: FWIW, I think that:
(I should flag that I didn't read the post very carefully, haven't read all the comments, and haven't formed a stable/confident view on this topic. Also, I'm currently sleep-deprived and expect my reasoning isn't super clear, unfortunately.)
I also think the comment is overconfident in substance, but that's something that happens often in productive debates, and I think that cost is worth paying and hard to avoid entirely if we want such debates to happen.
(Update: I've now made this entry.)
Publication norms
I haven't checked how many relevant posts there are, but I'd guess 2-10 quite relevant and somewhat notable posts?
Related entries
proliferation | AI governance | AI forecasting | [probably some other things]
Also the Forecasting Research Institute:
The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.
[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.
[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.
Also School of Thinking:
School of Thinking (SoT) is a media startup.
Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.
Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.
Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work and the work of RP's other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is "Understanding the diffusion of large language models: summary". We also have a lot of work that's unfortunately not public, either because it's still in progress or for other reasons (e.g., information hazards). I could share some more info via a DM if you want.
We haven't yet released a thorough public overview of the team, but we aim to do so in the coming months.
(*That other team - the General Longtermism team - may also be interested in funding, but I don't want to speak for them. I could probably connect you with them if you want.)