The AI Safety Fundamentals opportunities board, filtered for "funding" as the opportunity type, is probably also useful.
Oh wow, thanks for flagging that, fixed! Amazing that a whole extra word in the title itself survived a whole year, and survived me copy-pasting the title in various other places too 😬
Thanks for making this!
What do the asterisks before a given resource mean? (E.g. before "Act of Congress: How America’s Essential Institution Works, and How It Doesn’t".) Maybe they mean you're especially strongly recommending that?
AI Safety Support has a list of funding opportunities. I'm pretty sure all of them are already in this post + comments section, but it's plausible that'll change in future.
Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.
That said, FYI I've now switched to the folder being viewable by anyone with the link, rather than requiring requesting access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the max number of people who can be individually shared on a folder.)
Description provided to me by one of the organizers:
This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it.
See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work).
FYI, if any readers want just a list of funding opportunities and to see some that aren't in here, they could check out List of EA funding opportunities.
(But note that that includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)
$20 Million in NSF Grants for Safety Research
After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.
Here is the detailed program description.
The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.
Yeah, this seems to me like an important question. I see it as one subquestion of the broader, seemingly important, and seemingly neglected questions "What fraction of importance-adjusted AI safety and governance work will be done or heavily boosted by AIs? What's needed to enable that? What are the implications of that?"
I previously had a discussion focused on another subquestion of that, which is what the implications are for government funding programs in particular. I wrote notes from that conversation and will copy them below. (Some of this is also re...
"At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems."
Jeffrey Ladish is the Executive Director.
Senter for Langsiktig Politikk (Centre for Long-Term Policy)
"A politically independent organisation aimed at creating a better and safer future"
A think tank based in Norway.
Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description, in the podcast feed?
Reasoning:
Thanks! This seems valuable.
One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is?
Reasoning:
...Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.
We design tools, workshops and materials to support this mission. This is the first in
...We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.
Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biologica
This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment:
One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role!
We're hoping to open additional roles sometime around September. One way to be sure you'd be alerted if and when we do is to fill in our registration of interest form.
Nonlinear Support Fund: Productivity grants for people working in AI safety
...Get up to $10,000 a year for therapy, coaching, consulting, tutoring, education, or childcare
[...]
You automatically qualify for up to $10,000 a year if:
- You work full time on something helping with AI safety:
  - Technical research
  - Governance
  - Graduate studies
  - Meta (>30% of beneficiaries must work in AI safety)
- You or your organization received >$40,000 of funding to do the above work from any one of these funders in the last 365 days:
  - Open Philanthropy
  - The EA Funds (Infrastructure or Long-t
EffiSciences is a collective of students founded in the Écoles Normales Supérieures (ENS) acting for more involved research in the face of the problems of our world. [translated from French]
A*PART is an independent ML safety research and research facilitation organization working for a future with a benevolent relationship to AI.
We run AISI, the Alignment Hackathons, and an AI safety research update series.
Also the European Network for AI Safety (ENAIS)
TL;DR: The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!
...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!
The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.
People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you'...
Riesgos Catastróficos Globales (Global Catastrophic Risks)
...Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.
There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos
International Center for Future Generations
The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.
As of today, their website lists their priorities as:
I appreciate you sharing this additional info and reflections, Julia.
I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have war...
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
...Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.

Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as reducing international tensions and in becomin
The Collective Intelligence Project
...We are an incubator for new governance models for transformative technology.
Our goal: To overcome the transformative technology trilemma.
Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.
Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.
Collective fl
Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so the post Will and I wrote doesn't draw on or reference it, but it's of course relevant too.
Glad to hear that!
Oh also, just noticed I forgot to add info on how to donate, in case you or others are interested - that info can be found at https://rethinkpriorities.org/donate
An in-our-view interesting tangential point: It might fairly often be the case that a technological development initially increases risk, but later increases it by a smaller margin or even reduces it overall.
Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work and the work of RP's other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is Understanding the diffusion of large language models: summary. We also have a lot of work that's unfortunately not public, either because it's still in progress or e.g. due to information hazards. I could share some more info via a DM if you want.
We also have yet to release a thorough public overview of the...
Thanks - I only read this linkpost and Haydn's comment quoting your summary, not the linked post as a whole, but this seems to me like probably useful work.
One nitpick:
It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.
I feel like it'd be more useful/clearer to say "It seems x% likely that the US will create transformative AI before China, and y% likely if TAI is developed in short(ish) timelines (next 5-15 years)". Because:
Thanks, this seems right to me.
Are the survey results shareable yet? Do you have a sense of when they will be?
Also Cavendish Labs:
Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.
We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.
Hi Richard, quick reactions without having much context:
- If you mean this is all one company, this sounds like putting too many eggs in one basket, and insufficiently exploring.
- I think it's generally good to apply to many different types of roles and organizations.
- Sometimes it makes sense to focus mostly on one role type or one org. But probably not entirely. And not once one has already gotten some evidence that that's not the right fit. (Receiving a few rejections isn't much negative info, but if it's >5 for one particular org or type of
... (read more)