This is a linkpost for https://www.aipolicyideas.com/

Executive Summary 

  • AIPolicyIdeas.com is a new database that compiles AI policy ideas from various sources. It’s intended to help AI policy practitioners and researchers quickly review high-impact, high-feasibility AI policy ideas, inform decisions on what to push for or research, and identify gaps in knowledge for future work. (Alternate link in case any issues arise with the AIPolicyIdeas.com URL.) 
  • This was created relatively quickly. Users are encouraged to conduct their own research and analysis before making any decisions. 
  • Submit ideas through this form
  • If you’re working on existential-risk-relevant AI policy or related research, request access to the database via this form
    • Other people can also use GCR Policy’s related public database.
    • Approval for accessing the AI policy ideas database is not guaranteed. We appreciate your understanding if your application is not approved.

 

We are excited to announce the launch of AIPolicyIdeas.com, a database compiling AI policy ideas from various sources across the longtermist AI governance community and beyond. The database prioritizes inclusion of policy ideas that may help reduce catastrophic risk from AI and may be implementable in the US in the near- or medium-term (in the next ~5-10 years). The database includes policy ideas of varying levels of expected impact, clarity about how impactful they’d be, and feasibility. 

The ideas were curated by Abi Olvera from various sources such as Google Docs, the GCR Policy database, individual submissions, and public reports. For most ideas, we have included information on their source, relevant topic area, and relevant U.S. agency, as well as loose ratings estimating expected impact, feasibility, and specificity, and our degree of confidence/certainty in those estimates.

Collection Process: Abi started with a collection of lists of AI policy ideas from personal Google Docs, contacts, conversations, and public reports. To avoid redundancy, ideas were only added if they were not already in the database. The two largest sources of ideas were RP’s Survey on intermediate goals in AI governance and ideas shared by the GCR Policy team. Additional ideas will be gradually added from similar sources and from a form for idea submission.

Loose Ratings: To help sort the ideas, we used a loose five-point scale for impact, confidence in our impact assessment, feasibility, and specificity. These ratings were assigned by the original author, the GCR Policy evaluation team, or Abi. However, the ratings were not rigorously assessed and come from various sources, including different assessors, each with their own biases.

Note that most choices about what to include in the database and what ratings to give were made by Abi alone, without independent review.

Negative Impacts Not Well Accounted For: While we have included a range of policy ideas in this database, our ratings do not systematically assess potential downsides, and some ideas have lower confidence and unclear levels of expected impact. As a result, potential negative impacts are not well represented in this database. We encourage users to exercise caution when considering ideas, particularly those with uncertain impacts, and to conduct their own research and analysis before making any decisions.

Flag if You’re Researching or Available for Expertise on an Idea: We hope this database will serve as a useful resource for effective policymaking and research that can help make a positive impact on society. Researchers and policy practitioners can engage with the database by reviewing ideas, filtering them by relevant agency, and adding their names to the "Person Researching or Familiar With" column to collaborate with others. Users can also help keep the database up-to-date by sharing relevant ideas through the provided form.

Please reach out to me if you'd like to add a large collection to this database or have recommendations/suggestions for improvement.

Acknowledgements

This is a blog post from Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The author is Abi Olvera. Thanks to Amanda El-Dakhakhni for their guidance on this project and to Michael Aird, Ashwin Acharya, John Croxton, Markus Anderljung, Rumtin Sepasspour, Marie Buhl, Alex Lintz, Max Rauker, Renan Araujo, Emma Bluemke, Rose Hadshar, and many others for their helpful feedback.


If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.

Comments



Seems like a great initiative, but what’s the rationale behind not having the database publicly available?

There already seems to be a strong publicly available database: GCR’s. We actually synced our publicly available AI policy ideas to their database while working on this, strengthening GCR’s public database even more. This specific database allows for sharing of ideas that aren’t ready for prime time and that wouldn’t have been shared had they been meant for public dissemination. For example, these might be ideas that people are investigating, or would like others to investigate, but for which no public report exists. I reviewed a lot of Google Docs that were previously not shared with large groups of people. This expands access to that niche.

Just in case it is helpful, and I guess this might have been an inspiration for this excellent project: In the climate space, organizations like PCAP have made concrete, tactical plans that a new US president could implement right away without even congressional action. I do not know the details and do not know how successful these plans have been or how pivotal they have been in getting certain policies in place. But it seems at first glance like it is super useful. I imagine a future where some warning shot with AI happens and state leaders are looking around for what they can do right away. I feel like something like this might be very valuable in such a future.
