
MichaelA

Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
12408 karma · Joined Dec 2018 · Working (0-5 years) · Oxford, UK

Bio

I'm a Senior Research Manager in Rethink Priorities' AI Governance and Strategy team. I'm also an advisor to organizations such as Training for Good and an affiliate of the Centre for the Governance of AI. My prior work includes positions at the Center on Long-Term Risk and the Future of Humanity Institute.

Opinions expressed are my own. You can give me anonymous feedback here.

See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. 

Sequences: 4

Nuclear risk research project ideas
Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments: 2488
Topic Contributions: 793

Oh wow, thanks for flagging that, fixed! Amazing that a whole extra word in the title itself survived a whole year, and survived me copy-pasting the title in various other places too 😬

Thanks for making this!

What do the asterisks before a given resource mean? (E.g. before "Act of Congress: How America’s Essential Institution Works, and How It Doesn’t".) Maybe they mean you're especially strongly recommending that? 

AI Safety Support have a list of funding opportunities. I'm pretty sure all of them are already in this post + comments section, but it's plausible that'll change in future. 

Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.

That said, FYI I've now switched to the folder being viewable by anyone with the link, rather than requiring requesting access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the max number of people who can be individually shared on a folder.)

AI Safety Impact Markets

Description provided to me by one of the organizers: 

This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it. 

See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work). 

FYI, if any readers want just a list of funding opportunities and to see some that aren't in here, they could check out List of EA funding opportunities.

(But note that that includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)

$20 Million in NSF Grants for Safety Research

After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.

Here is the detailed program description.

The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.

Announcing Manifund Regrants

Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.
