
This is a series of posts listing projects that could be valuable for someone to work on. The unifying theme is that they are projects that:

  • Would be especially valuable if transformative AI is coming in the next 10 years or so.
  • Are not primarily about controlling AI or aligning AI to human intentions.[1]
    • Most of the projects would be valuable even if we were guaranteed to get aligned AI.
    • Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.

The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially: discussing the objection “Why not leave these issues to future AI systems?”), you can see the section How ITN are these issues? from my previous memo on some neglected topics.

The lists are definitely not exhaustive. Failure to include an idea doesn’t necessarily mean I wouldn’t like it. (Similarly, although I’ve made some attempts to link to previous writings when appropriate, I’m sure to have missed a lot of good previous content.)

There’s a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I’d be excited if you reached out to me! [2]

There’s also a lot of variation in skills needed for the projects. If you’re looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets):

[ML]   [Empirical research]   [Philosophical/conceptual]   [survey/interview]   [Advocacy]   [Governance]   [Writing]   [Forecasting]

The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you’re most interested in.

Acknowledgements

Few of the ideas in these posts are original to me. I’ve benefited from conversations with many people. Nevertheless, all views are my own.

For some projects, I credit someone who especially contributed to my understanding of the idea. If I do, that doesn’t mean they have read or agree with how I present the idea (I may well have distorted it beyond recognition). If I don’t, I’m still likely to have drawn heavily on discussion with others, and I apologize for any failure to assign appropriate credit.

For general comments and discussion, thanks to Joseph Carlsmith, Paul Christiano, Jesse Clifton, Owen Cotton-Barratt, Holden Karnofsky, Daniel Kokotajlo, Linh Chi Nguyen, Fin Moorhouse, Caspar Oesterheld, and Carl Shulman.

  1. ^

    Nor are they primarily about reducing risks from engineered pandemics.

  2. ^

    My email is [last name].[first name]@gmail.com

Comments (1)



Thanks for making this series! I added it to this overview of project ideas: Impactful Projects and Organizations to Start - List of Lists.

Commenting here because I thought it might be useful for those drawn to this post to know that this list of lists exists.
