Just adding that:
My standard take is that people should have a somewhat low bar for applying to things that might be a great fit, and then if they get to a work trial or an offer they should dive deep on the details, think about their broader career plans (e....
Haven't read much here, but just flagging that the first sentences of my post were not merely "just apply" but rather:
"Don’t spend too long thinking about the pros and cons of applying to an opportunity (e.g., a job, grant, degree program, or internship). Assuming the initial application wouldn’t take you long, if it seems worth thinking hard about, you should probably just apply instead." [emphasis changed]
This is indeed ideally complemented by heuristics about which specific things to apply to, and with some other career-capital-building moves like doing ...
Nice post! If someone wants AI governance/safety donation recommendations, feel free to message me. There's a set of orgs that I'm confident (a) are better suited to funding from non-OP donors than from OP and (b) are seen as good by people like me and OP grantmakers. These are the orgs I donate to myself. Up to a given person whether they want my suggestions, of course!
(I was previously a grantmaker for EA Funds, and have been in the AI governance space for a few years.)
Hi Richard, quick reactions without having much context:
The AI Safety Fundamentals opportunities board, filtered for "funding" as the opportunity type, is probably also useful.
AI Safety Support has a list of funding opportunities. I'm pretty sure all of them are already in this post + comments section, but it's plausible that'll change in future.
Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.
That said, FYI I've now switched to the folder being viewable by anyone with the link, rather than requiring people to request access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the max number of people who can be individually shared on a folder.)
Description provided to me by one of the organizers:
This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it.
See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work).
FYI, if any readers want just a list of funding opportunities and to see some that aren't in here, they could check out List of EA funding opportunities.
(But note that that list includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)
$20 Million in NSF Grants for Safety Research
After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.
Here is the detailed program description.
The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.
Yeah, this seems to me like an important question. I see it as one subquestion of the broader, seemingly important, and seemingly neglected questions "What fraction of importance-adjusted AI safety and governance work will be done or heavily boosted by AIs? What's needed to enable that? What are the implications of that?"
I previously had a discussion focused on another subquestion of that, which is what the implications are for government funding programs in particular. I wrote notes from that conversation and will copy them below. (Some of this is also re...
"At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems."
Jeffrey Ladish is the Executive Director.
"Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence."
"Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy."
Senter for Langsiktig Politikk
"A politically independent organisation aimed at creating a better and safer future"
A think tank based in Norway.
Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description, in the podcast feed?
Reasoning:
Thanks! This seems valuable.
One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is?
Reasoning:
...Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.
We design tools, workshops and materials to support this mission. This is the first in
...We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.
Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biologica
This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment:
One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role!
We're hoping to open additional roles sometime around September. One way to be sure you'd be alerted if and when we do is to fill in our registration of interest form.
Nonlinear Support Fund: Productivity grants for people working in AI safety
...Get up to $10,000 a year for therapy, coaching, consulting, tutoring, education, or childcare
[...]
You automatically qualify for up to $10,000 a year if:
- You work full time on something helping with AI safety
  - Technical research
  - Governance
  - Graduate studies
  - Meta (>30% of beneficiaries must work in AI safety)
- You or your organization received >$40,000 of funding to do the above work from any one of these funders in the last 365 days
  - Open Philanthropy
  - The EA Funds (Infrastructure or Long-t
SaferAI is developing the technology that will make it possible to audit and mitigate potential harms from general-purpose AI systems such as large language models.
EffiSciences is a collective of students, founded within the Écoles Normales Supérieures (ENS), working to make research more engaged with the problems of our world. [translated from French]
See also this detailed breakdown of potential funding options for EA (community-building-type) groups specifically.
A*PART is an independent ML safety research and research facilitation organization working for a future with a benevolent relationship to AI.
We run AISI, the Alignment Hackathons, and an AI safety research update series.
Also the European Network for AI Safety (ENAIS)
TL;DR: The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!
...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!
The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.
People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you'...
Riesgos Catastróficos Globales
...Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.
There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos
International Center for Future Generations
The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.
As of today, their website lists their priorities as:
I appreciate you sharing this additional info and reflections, Julia.
I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have war...
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
...Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.
Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as in reducing international tensions and in becomin
Policy Foundry
an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]
The Collective Intelligence Project
...We are an incubator for new governance models for transformative technology.
Our goal: To overcome the transformative technology trilemma.
Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.
Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.
Collective fl
Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so the post Will and I wrote doesn't draw on or reference it, but it's of course relevant too.
Glad to hear that!
Oh also, just noticed I forgot to add info on how to donate, in case you or others are interested - that info can be found at https://rethinkpriorities.org/donate
An in-our-view interesting tangential point: It might decently often be the case that a technological development initially increases risk but then later increases risk by a smaller margin, or even reduces risk overall.
Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work and the work of RP's other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is Understanding the diffusion of large language models: summary. We also have a lot of work that's unfortunately not public, either because it's still in progress or e.g. due to information hazards. I could share some more info via a DM if you want.
We also have yet to release a thorough public overview of the...
There's now also the related concept of viatopia, which is maybe a better concept/term. Not sure what the very best links on that are but this one seems a good starting point.