The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.
We have a longlist of project ideas that we’d be excited to help launch.
We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous).
All submissions must be received in the next week, i.e. by Monday, March 7, 2022.
We are excited about this prize for two main reasons:
- We would love to add great ideas to our list of projects.
- We are excited about experimenting with prizes to jumpstart creative ideas.
To participate, you can either
- Add your proposal as a comment to this post (one proposal per comment, please), or
- Fill in this form
Please write your project idea in the same format as the project ideas on our website. Here’s an example:
Early detection center
Biorisk and Recovery from Catastrophes
By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.
You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.
Some rules and fine print:
- You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
- At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
- At our discretion, we will award larger prizes for submissions that we really like.
- Prizes will be awarded at the sole discretion of the Future Fund.
We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.
We’re excited to see what you come up with!
(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)
Retrospective grant evaluations
Research That Can Help Us Improve
This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.
I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.
Starting EA community offices
Effective altruism
(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)
Here is a more ambitious version:
EA Coworking Spaces at Scale
Effective Altruism
Here is an even more ambitious one:
Found an EA charter city
Effective Altruism
Investment strategies for longtermist funders
Research That Can Help Us Improve, Epistemic Institutions, Economic Growth
Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out.
We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.
Some of the ways the strategies of altruistic funders may differ include:
- Mission-correlated investing
...

I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up/modelled on the existing EA long term future fund (which I recall reading about but can't find now, sorry).
Edit - found it and some ideas - see this and top level post.
Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles
Effective Altruism
When it comes to enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do related research (e.g. on health issues) themselves. We would like to see dedicated experts on these issues that offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.
Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.
Reducing gain-of-function research on potentially pandemic pathogens
Biorisk
Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.
Additional notes:
- There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals(*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishi...

Putting Books in Libraries
Effective Altruism
The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy for each library any number of books from the menu. This would ensure that folks interested in EA-related topics who browse a library discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.
The case for the project is that if you assume that the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than its financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries within a short period of weeks to months. It can also serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
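As a rough sketch of the break-even arithmetic (the $200k and $20 figures come from the comment above; the reads-per-year and shelf-life numbers are invented illustrative assumptions):

```python
# Back-of-the-envelope: break-even conversion probability for one placed book.
value_per_contributor = 200_000  # $ value of one new EA contributor (figure from above)
cost_per_book = 20               # $ cost per book placed (figure from above)
reads_per_year = 1               # assumed: each book is read once per year
years_on_shelf = 10              # assumed shelf life of a library copy

total_reads = reads_per_year * years_on_shelf
breakeven_p = cost_per_book / (value_per_contributor * total_reads)
print(f"Break-even conversion probability per read: {breakeven_p:.4%}")
# -> 0.0010%, i.e. roughly 1 in 100,000 reads must produce a new contributor.
```

On these assumptions the bar for cost-effectiveness is very low, which is why the question below about how often library books are actually read is the load-bearing variable.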
I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.
In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.
I like this idea, but I wonder - how many people / students actually use physical libraries still? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).
Never Again: A Blue-Ribbon Panel on COVID Failures
Biorisk, Epistemic Institutions
Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?
We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Are you thinking of EAs running this themselves? We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.
I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel". In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project. But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.
Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name.
https://en.wikipedia.org/wiki/Never_again
Cognitive enhancement research and development (nootropics, devices, ...)
Values and Reflective Processes, Economic Growth
Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability---such as long- and short-term memory, abstract reasoning, creativity---and any stage of the research and development pipeline, from wet-lab research and engineering, through testing in humans, to product development.
Additional notes on cognitive enhancement research:
- Importance:
- Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
- Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising; there is some precedent (e.g. massive benefit...

Create and distribute civilizational restart manuals
A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme diasters.
The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus the 10 most important books, sent to 100 safe and relatively uncorrelated locations -- somewhere in New Zealand, an Antarctic research base, a couple of nuclear bunkers, nuclear submarines, etc.
We are perhaps even more concerned about great moral values, like concern for all sentient beings, surviving and re-emerging than about preserving civilization itself, so we would love for people to do further research and work on how to preserve cosmopolitan values as well.
My comment from another thread applies here too:
SEP for every subject
Epistemic institutions
Create free online encyclopedias for every academic subject (or those most relevant to longtermism), written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well-loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decisionmakers and the public with better access to academic views on a variety of topics.
Purchase a top journal
Metascience
Journals give academics bad incentives: they require new knowledge to be written in hard-to-understand language, published without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivised high-quality work on the most important topics would begin to turn the scientific system around.
We could, of course, simply get the future fund to pay for this. There is, however, an alternative that might be worth thinking about.
This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.
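A minimal sketch of the payoff rule that makes pledging a dominant strategy (the target, pledges, and refund bonus below are illustrative numbers, not a costing of a journal purchase):

```python
# Dominant assurance contract: if the target is missed, backers are refunded
# in full PLUS a bonus (here underwritten by the Future Fund), so pledging
# weakly dominates not pledging for anyone who wants the journal bought.
def settle(pledges: dict[str, float], target: float, refund_bonus: float):
    total = sum(pledges.values())
    if total >= target:
        return {"funded": True, "refunds": {}}  # funds go to the purchase
    return {"funded": False,
            "refunds": {backer: amount + refund_bonus
                        for backer, amount in pledges.items()}}

# Illustrative numbers only.
print(settle({"alice": 1_000_000, "bob": 250_000},
             target=20_000_000, refund_bonus=5_000))
```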
Here is one way we could do this:
A Longtermist Nobel Prize
All Areas
The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.
(A variation on a suggestion by DavidMoss)
Megastar salaries for AI alignment work
Artificial Intelligence
Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.
Preventing factory farming from spreading beyond the earth
Space governance, moral circle expansion (yes, I am also proposing a new area of interest).
Early space advocates such as Gerard O’Neill and Thomas Heppenheimer both included animal husbandry in their designs of space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in or announced projects to employ fish or insect farming in space.
If successful, this might multiply the suffering of farmed animals to many times the current number of farmed animals on earth, spread across the long-term future. Research is needed in areas like:
- Continuous tracking of the scientific research on transporting and raising animals in space colonies or other planets.
- Tracking, or even conducting research on the feasibility of cultivating meat in space.
- Tracking the development and implementation of AI in factory farming, which might enable unmanned factory farms and therefore make space factory farming more feasible. For instance, the aquaculture industry is hoping that AI can help them overcome major difficultie...

Longtermist Policy Lobbying Group
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bi-partisan, there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. While such a project could scale to be very large once successful, anyone working on it should start small and tread carefully, aiming to avoid the unilateralist's curse and taking care not to make longtermism into an overly partisan issue. Longtermist lobbying might also be best done as lobbying for other clear areas related to longtermism, framed as distinct ideas, such as lobbying for climate change mitigation or pandemic preparedness.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Landscape Analysis: Longtermist Policy
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bi-partisan but there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Experiments to scale mentorship and upskill people
Empowering Exceptional People, Effective Altruism
For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and there are few well-defined paths for young or early-career people to follow, so it can be hard to enter the field. Experts in the field are often ideal mentors - they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open access to new opportunities - but there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The kinds of solutions possible are very broad: developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.
Proportional prizes for prescient philanthropists
Effective Altruism, Economic Growth, Empowering Exceptional People
A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.
Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.
If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.
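A minimal sketch of the proportional payout described above, assuming a fixed prize pool and a cutoff below which payouts are skipped because administration would cost more than the prize (all figures invented):

```python
# Proportional prizes: each verified supporter receives a share of the pool
# proportional to their contribution, skipping payouts too small to be worth
# the administrative overhead.
def allocate(pool: float, contributions: dict[str, float],
             min_payout: float) -> dict[str, float]:
    total = sum(contributions.values())
    raw = {name: pool * c / total for name, c in contributions.items()}
    return {name: round(amount, 2)
            for name, amount in raw.items() if amount >= min_payout}

# Donations in $, advisor hours valued at market rates, etc. (invented data).
supporters = {"founder": 50_000, "donor_a": 20_000,
              "donor_b": 500, "advisor": 10_000}
print(allocate(pool=100_000, contributions=supporters, min_payout=1_000))
# donor_b's ~$621 share falls below the $1,000 floor and is skipped.
```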
High quality, EA Audio Library (HEAAL)
all/meta, though I think the main value add is in AI
(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)
Audio has several advantages over text when it comes to consuming long-form content, with one significant example being that people can consume it while doing some other task (commuting, chores, exercising), meaning the time cost of consumption is almost 0. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:
What does high quality mean here, and what content might get covered?
High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical n...
High-quality human performance is much more engaging than autogenerated audio, fwiw.
Our World in Base Rates
Epistemic Institutions
Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a very great public good.
So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example.
e.g.
“85% of big data projects fail”;
“10% of people refuse to be vaccinated because of fear of needles (pre-COVID, so you can compare to COVID hesitancy)”;
"11% of ballot initiatives pass"
“7% of Emergent Ventures applications are granted”;
“50% of applicants get 80k advice”;
“x% of applicants get to the 3rd round of OpenPhil hiring”, "which takes y months";
“x% of graduates from country [y] start a business”.
MVP:
Later, Q...
I think this is neat.
Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."
I of course see this as basically a bunch of estimation functions, but you get the idea.
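A minimal sketch of what one such estimation function might look like, starting from a coarse base rate and adjusting on project parameters (the base rates echo the examples above; the adjustments are invented placeholders):

```python
# Sketch: refine a coarse base rate with project-specific adjustments.
BASE_FAILURE_RATE = {"big_data_project": 0.85,   # "85% of big data projects fail"
                     "ballot_initiative": 0.89}  # "11% of ballot initiatives pass"

def failure_probability(kind: str, team_has_shipped_before: bool,
                        budget_usd: float) -> float:
    p = BASE_FAILURE_RATE[kind]      # start from the coarse base rate
    if team_has_shipped_before:
        p -= 0.10                    # assumed: experienced teams fail less
    if budget_usd < 50_000:
        p += 0.05                    # assumed: underfunded projects fail more
    return min(max(p, 0.0), 1.0)     # clamp to a valid probability

print(failure_probability("big_data_project", True, 200_000))  # ≈ 0.75
```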
Teaching buy-out fund
Reallocate EA researchers' time from teaching activities to research
Problem: Professors spend a lot of their time teaching instead of researching. Many don’t know that many universities offer “teaching buy-outs”, where if you pay a certain amount of money, you don’t have to teach. Many also don’t know that a lot of EA funders would be interested in paying for that.
Solution: Make a fund that's explicitly for this, so that more EAs know about it. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.
Adversarial collaborations on important topics
Epistemic Institutions
There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, among which there are reasonable people with very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve the disagreement or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified from them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.
Existing and past work relevant to this space include the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.
Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism
Epistemic Institutions, Values and Reflective Processes
Right now longtermism is being developed within a relatively narrow set of stakeholders and participants relative to the broad set of people (and nonhumans) that would be affected by the decisions we make. We'd like to see focus groups that attempt to engage a more diverse group of people (diversity across many axes including but not limited to race, gender, age, geography, and socioeconomic status) and attempt to explain longtermism to them and explore what visions they have for the future of humanity (and nonhumans). Hopefully through many iterations we can find a way to go across what is likely rather large initial inferential distance to explore how a broader and more diverse group of people would think about longtermism once ideally informed. This can be related to and informed by engaging in deliberative democracy. This could also help initiate what longtermists call "the long reflection".
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Foundational research on the value of the long-term future
Research That Can Help Us Improve
If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?
To elaborate on what I have in mind with this proposal, it seems important to conduct research beyond reducing existential risk over the next century – we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of the posts:
- "Disappointing Futures" Might Be As Important As Existential Risk - Michael Dickens
- Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese
- The expected value of extinction risk reduction is positive - Jan Brauner and Friederike Grosse-Holz - and A longtermist critique of “The expected value of extinction risk reduction is positive”
- Should We Prioritize Long-Term Existential...

Incubator for Independent Researchers
Training People to Work Independently on AI Safety
Problem: AI safety is bottlenecked by management and jobs. There are <10 orgs where you can do AI safety full time, and they are limited by the number of people they can manage and by their research interests.
Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.
This could also be structured as a research organization instead of an incubator.
EA Marketing Agency
Improve Marketing in EA Domains at Scale
Problem: EAs aren’t good at marketing, and marketing is important.
Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.
Expected value calculations in practice
Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.
Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.
We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.
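A minimal sketch of one ingredient of such tooling: pushing uncertain inputs through a simple model by Monte Carlo sampling to get an expected-value estimate. Every distribution and figure below is an invented placeholder, not a real estimate:

```python
import random

# Monte Carlo expected value for a hypothetical speculative project.
def sample_net_value() -> float:
    p_success = random.betavariate(2, 8)               # uncertain chance of success
    value_if_success = random.lognormvariate(14, 1.5)  # heavy-tailed payoff ($)
    cost = 500_000                                     # assumed project cost ($)
    return p_success * value_if_success - cost

samples = [sample_net_value() for _ in range(100_000)]
ev = sum(samples) / len(samples)
print(f"Estimated expected net value: ${ev:,.0f}")
```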
AGI Early Warning System
Anonymous Fire Alarm for Spotting Red Flags in AI Safety
Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.
Solution: create an anonymous form - a “fire alarm” (like a whistleblowing Andon Cord of sorts) - where these employees can report what they’re seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.
Alignment Forum Writers
Pay Top Alignment Forum Contributors to Work Full Time on AI Safety
Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.
Solution: Offer them enough money to quit their job and work on AI safety full time.
(Per Nick's note, reposting)
Political fellowships
Values and Reflective Processes, Empowering Exceptional People
We’d like to fund ways to pull people who wouldn't otherwise run for political office into running for political office. It's like a MacArthur. You get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get some training, like the DCCC and NRCC provide, and when you run, you get two million spent by the super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.
The Billionaire Nice List
Philanthropy
A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, people like checking lists. Let's attempt to create a list which can be sorted by different moral weights and incentivises billionaires to do more good.
Pro-immigration advocacy outside the United States
Economic Growth
Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress in the U.S. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing or keeping immigration levels about the same. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for immigration there.
What this project could look like:
Related posts:
- Which countries are most receptiv...

Improving ventilation
Biorisk
Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.
We'd like to see a project that does rigorous research to establish strong ventilation strategies in a variety of contexts and explore their effectiveness against different airborne hazards. Once successful ventilation strategies are developed, assuming it would be cost-effective to do so, this project could then aim to roll out ventilation and campaign/market for ventilation interventions either as a for-profit, non-profit, or hybrid.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Advocacy organization for unduly unpopular technologies
Public opinion on key technologies.
Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.
Building the grantmaker pipeline
Empowering Exceptional People, Effective Altruism
The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding and training new grantmakers on best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, and more.
NB: This might be a refinement of fellowships, but I think it's particularly important.
Website for coordinating independent donors and applicants for funding
Empowering exceptional people, effective altruism
At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website which allows applicants to post requests for funding and allows potential donors to browse those requests and offer to fully or partially fund applicants, seems like an effective solution.
Nuclear arms reduction to lower AI risk
Artificial Intelligence and Great Power Relations
In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think understanding the dynamics of and policy responses to this topic are under-researched and would benefit from further investigation.
Incremental Institutional Review Board Reform
Epistemic Institutions, Values and Reflective Processes
Institutional Review Boards (IRBs) regulate biomedical and social science research. In addition to slowing and deterring life-saving biomedical research, IRBs interfere with controversial but useful social science research: e.g., Scott Atran was deterred from studying Jihadi terrorists; Mark Kleiman was deterred from studying the California prison system; and a Florida State University IRB cited public controversy as a reason to deter research. We would like to see a group focused on advocating for plausible reforms to IRBs that allow more social science research to be performed. Some plausible examples:
Concrete steps to these goals could be:
- sponsoring a prize for the first university that allowed use of Prof. Omri Ben-Shahar’s electronic checklist tool;
- setting up a journal for “Deterred Social Science Resea...

Top ML researchers to AI safety researchers
Pay top ML researchers to switch to AI safety
Problem: <0.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.
Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.
EA Productivity Fund
Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.
Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can’t afford them or would be put off by the cost (due to imposter syndrome or just because it feels selfish).
Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.
Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)
Economic Growth, Effective Altruism
Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days when you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance and short-term productivity gains? What are the long-term health effects? Does it affect longevity?
Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.
My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medic...
Sub-extinction event drills, games, exercises
Civilizational resilience to catastrophes
Someone should build up expertise and produce educational materials / run workshops on questions like
Differentially distributing these materials/workshops to people who live in geographical areas likely to survive at all could help rebuilding efforts in worlds where massive sub-extinction events occur.
Centralising Information on EA/AI Safety
Effective Altruism, AI Safety
There are many lists of opportunities available in EA/AI safety and many lists of what organisations exist. Unfortunately, these lists tend to become outdated. It would be extremely valuable to have a single list that is up to date and filterable according to various criteria. This would require paying someone to maintain it part-time.
Another opportunity for centralisation would be to create an EA link shortener with pretty URLs. So for example, you'd be able to type in ea.guide/careers to see information on careers or ea.guide/forum to jump to the forum.
Notes: I own the URL ea.guide so I'd be able to donate it.
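A minimal sketch of the shortener idea, here as a tiny Flask app (the slugs and target URLs are illustrative examples; ea.guide is the domain mentioned above):

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Pretty-URL redirect map; slugs and targets are illustrative examples.
SHORTLINKS = {
    "careers": "https://80000hours.org/career-guide/",
    "forum": "https://forum.effectivealtruism.org/",
}

@app.route("/<slug>")
def follow(slug: str):
    target = SHORTLINKS.get(slug)
    if target is None:
        abort(404)
    return redirect(target, code=302)

if __name__ == "__main__":
    app.run()  # point the ea.guide domain at this app to serve ea.guide/<slug>
```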
Automated Open Project Ideas Board
The Future Fund
All of these ideas should be submitted to a board where anyone can forecast their value (in dollars, or lives saved per $) as rated by a trusted research organisation, say Rethink Priorities. The forecasts can be incentivised by reputation or run as prediction markets. Then that research organisation checks 1% of the ideas and scores them. These scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Then funders can donate to them as they see fit.
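A minimal sketch of the weighting step described above: score each forecaster on the spot-checked subset, then weight their forecasts on everything else accordingly (all forecasts and check scores are invented):

```python
# Weight crowd forecasts by accuracy on a spot-checked subset of ideas.
forecasts = {  # forecaster -> {idea: forecast of the idea's value score}
    "ann": {"idea1": 8.0, "idea2": 3.0, "idea3": 6.0},
    "bob": {"idea1": 2.0, "idea2": 9.0, "idea3": 4.0},
}
checked = {"idea1": 7.0}  # the ~1% scored by the trusted research org

def weight(preds: dict[str, float]) -> float:
    # Smaller average error on checked ideas -> larger weight.
    errors = [abs(preds[idea] - score) for idea, score in checked.items()]
    return 1.0 / (1.0 + sum(errors) / len(errors))

weights = {f: weight(p) for f, p in forecasts.items()}
total_w = sum(weights.values())
ranking = {idea: sum(w * forecasts[f][idea] for f, w in weights.items()) / total_w
           for idea in ("idea2", "idea3")}  # the unchecked ideas
print(ranking)  # ann's forecasts dominate because she was closer on idea1
```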
Massive US-China exchange programme
Great power conflict, AI
Fund (university) students to live with a host family in the other country: between the US and China, Russia and the US, China and India, and potentially India and Pakistan. This is important if one thinks that personal experience makes it less likely that individuals incentivise or encourage escalation, war, and certain competitive dynamics.
Nuclear/Great Power Conflict Movement Building
Effective Altruism
Given the current situation in Ukraine, movement-building related to nuclear x-risk or great power conflict would likely be much more tractable than it was until recently. We don't know how long this period will last and the memory of the public can be short, so we intend to take advantage of this opportunity. This outreach should focus on people with an interest in policy or potential student group organisers, as these people are most likely to have an influence here.
Longtermism movement-building/election/appointment efforts, targeted at federal and state governments
Effective altruism
Increasing knowledge of and alignment with longtermism in government by targeted movement-building and facilitating the election/appointment of sympathetic people (and of close friends and family of sympathetic people) could potentially be very impactful. If longtermism/EA becomes a social norm in, say, Congress or the Washington 'blob', we could benefit from the stickiness of this social norm.
Pilot emergency geoengineering solutions for catastrophic climate change
Research That Can Help Us Improve
Toby Ord puts the risk of runaway climate change causing the extinction of humanity by 2100 at 1/1000, a staggering expected loss. Emergency solutions, such as seeding oceans with carbon-absorbing algae or creating more reflective clouds, may be our last chance to prevent catastrophic warming but are extraordinarily operationally complex and may have unforeseen negative side-effects. Governments are highly unlikely to invest in massive geoengineering solutions until the last minute, at which point they may be rushed in execution and cause significant collateral damage. We’d like to fund people who can:
Epistemic status: there seems to be rea...
(Per Nick's note, reposting)
Market shaping and advanced market commitments
Epistemic institutions; Economic Growth
Market shaping is when committed demand or other forces are used to jump-start a market that would not otherwise form. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, but the approach has been used several times before for vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping or advanced market commitments in our areas of interest.
(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)
Crowding in other funding
We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive and some others are only ever likely to attract (or want) longtermist funding. But, we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:
A center applying epistemic best practices to predicting & evaluating AI progress
Artificial Intelligence and Epistemic Institutions
Forecasting and evaluating AI progress is difficult and important. Current work in this area is distributed across multiple organizations or individual researchers, not all of whom possess (a) the technical expertise, (b) knowledge & skill in applying epistemic best practices, and (c) institutional legitimacy (or otherwise suffer from cultural constraints). Activities of the center could include providing services to AI groups (e.g. offering superforecasting training or prediction services), producing bottom-line reports on "How capable is AI system X?", hosting adversarial collaborations, pointing out deficiencies in academic AI evaluations, and generally pioneering "analytic tradecraft" for AI progress.
An Organisation that Sells its Impact for Profit
Empowering Exceptional People, Epistemic Institutions
Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) selling impact certificates, (2) prizes, and/or (3) grants (but only if they value the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis. Funding would be raised from for-profit investors. Staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if the funding mechanisms (1-3) become sufficiently popular, but given the increased funding in longtermist circles, this now looks increasingly likely.
See also Retrospective grant evaluations, Retroactive public goods funding, Impact ...
Tradable impact certificates
Effective Altruism, Research That Can Help Us Improve, Economic Growth
Issuing and trading impact certificates can popularize and normalize impact investment and profitable strategic research among the world's economic influencers. Economic growth would then be steered in an approximately good direction; what would remain is to further popularize the management and incentivization of work via impact certificates.
Better understanding the needs of organisational leaders
Coincidence of wants problems
In EA, organisational leaders and potential workers often don't have good information about each other’s needs and offerings (see EA needs consultancies). The same is true for researchers who might like to do research for organisations but don't know what to do. We would like to fund work to help resolve this. This could involve collecting advanced market commitments from funders (e.g., org group x would pay up to x for y hours of design time next year, on average). It could involve identifying unknowns for key decision-makers in EA in relevant areas (e.g., institutional decision-making, longtermism, or animal welfare), which could be used to develop research agendas and kickstart research.
Organization to push for mandatory liability insurance for dual-use research
Biorisk and Recovery from Catastrophe
Owen Cotton-Barratt for the Global Priorities Project in 2015:
...

Rationalism But For Group Psychology
Epistemic Institutions
LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.
We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and help groups achieve more impact, like how the original rationalist movement did so with individuals. We imagine this would involve identifying useful ideas from group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as trying to intentionally experiment. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to help align employees with organizational goals, better ways to keep track of projects, etc.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Wild animal suffering in space
Space governance, moral circle expansion.
Terraforming other planets might cause animals to come to exist on these planets, whether through intentional or unintentional actions. These animals might live net-negative lives.
Also, we cannot rule out the possibility that there are already wild "animals" (or other forms of sentient beings) who might be suffering net-negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)
Relevant research includes:
Physical AI Safety
Drawing from work done in the former Soviet Union to improve safety in bioweapons and nuclear facilities (e.g. free consultations and installation of engineering safety measures, at-cost upgrades of infrastructure such as ventilation and storage facilities), develop a standard set of physical/infrastructure technologies to help monitor AI development labs/hardware and provide physical failsafes in the event of unexpectedly rapid takeoff (e.g., a FOOM scenario). Although unlikely, some standard guidelines modifying current best practices for data center safety (e.g., restrictions on devices, physical air gaps between critical systems and the broader world, extensive onsite power monitoring and backup generators) could be critical to prevent anxiety over both physical and digital security from encouraging risk-taking behaviors by AI development programs (such as rushing builds, hiding locations, or inappropriate dual-use or shared facilities which decrease control over data flows). In particular, physical low-tech hardware such as low-voltage switches has already provided demonstrable benefit in safeguarding high-tech, high-risk activity (see the Goldsb...
AI alignment prize suggestion: Introduce AI Safety concepts into the ML community
Artificial Intelligence
Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, and sometimes include initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for further research, and demonstrate that researchers can do alignment research while also publishing in top venues. A great example would be Optimal Policies Tend To Seek Power, published in NeurIPS. The Future Fund could advertise prizes for any paper that gets published in a top ML/NLP/Computer Vision conference (for ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.
EA Macrostrategy
Effective Altruism
Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.
A Project Candor for Global Catastrophic Risks
Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism
This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threats from nuclear weapons and the Soviet Union had inaugurated a new era in human history: the Age of Peril. Today, at the precipice, the Age of Peril continues with possible risks from engineered pandemics, thermonuclear exchange, great power war, and more. Voting behavior and public discourse, however, do not seem attuned to these risks. A new privately-funded Project Candor would communicate to the public the nature of the threats, their probabilities, and what we can do about them. This proposal is related to "a fund for movies and documentaries" and "new publications on the most pressing issues," but differs in that it would be a unified and coordinated campaign across multiple media.
A social media platform with better incentives
Epistemic Institutions, Values and Reflective Processes
Social media has arguably become a major way in which people consume information and develop their values, and the most popular platforms are far from optimally set up to bring people closer to truthfulness or altruistic ends. We’d love to see experiments with social media platforms that provide more pro-social incentives and yet have the potential to reach a large audience.
Eliminate all mosquito-borne viruses by permanently immunizing mosquitoes
Biorisk and Recovery from Catastrophe
Billions of people are at risk from mosquito-borne viruses, including the threat of new viruses emerging. Over a century of large-scale attempts to eradicate mosquitoes as virus vectors has changed little: there could be significant value in demonstrating large-scale, permanent vector control for both general deployment and rapid response to novel viruses. Recent research has shown that infecting mosquitoes with Wolbachia, a bacterium, out-competes viruses (including dengue, yellow fever and Zika), preventing the virus from replicating within the insect, essentially immunizing it. The bacterium passes to future generations by infecting mosquito eggs, allowing a small release of immunized mosquitoes to gradually and permanently immunize an entire population of mosquitoes. We are interested in proposals for taking this technology to massive scale, with a particular focus on rapid deployment in the case of novel mosquito-borne viruses.
Epistemic status: Wolbachia impact on dengue fever has been demonstrated in a large RCT and about 10 city-level pilots. Impact on ot... (read more)
Increasing social norms of moral circle expansion/cooperation
Moral circle expansion
International cooperation on existential risks and other impactful issues is largely downstream of social norms, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and cooperation with out-group members could be very impactful, especially in relevant countries (e.g., the US and China) and among relevant decision-makers.
Movement-building/research/pipeline for content creators/influencers
Effective altruism
Popular content creators/influencers have a lot of outreach potential and earning-to-give potential. We should investigate investing in movement-building or a pipeline into this field. Practical research on how to be a successful influencer is also likely to be broadly applicable to movement-building in general.
Burying caches of basic machinery needed to rebuild civilisation from scratch
Recovery from Catastrophe
Should the worst happen and a global catastrophe strike, we want to be able to help survivors rebuild civilisation as quickly and efficiently as possible. To this end, burying caches of machinery that can be used to bootstrap development is a useful part of a civilisation recovery toolkit. Such a cache could take the form of a shipping container filled with heavy machines of open-source design, such as a wind turbine, an engine, a tractor with backhoe, an oven, basic computers, and CNC fabricators, along with written instructions and a selection of useful books. First we aim to put together a prototype of such a cache and test it in various locations with people of various skill levels, to see how well they fare at "rebuilding" in simulated catastrophe scenarios. Learning from this, we will iterate on the design until at least 10% of simulations are successful (to what is judged to be a reasonable level). We ultimately aim to bury 10,000 such caches at strategic locations around the world. Some will be in well-known locations (for the case of sudde... (read more)
Targeted social media advertising to give away high-value books
Effective Altruism, Values and Reflective Processes, Epistemic Institutions
Books are a high-fidelity means of spreading ideas. We think that high-value books are those that promote the safeguarding and flourishing of humanity and all sentient life, using evidence and reason. Many of the most valuable books have come out of the Effective Altruism (EA) movement over the last decade. We are keen for more people who want to maximize the good they do to read them. Offering those most likely to be interested in EA ideas free high-value books via targeted adverts on social media could be a highly cost-effective means of growing the EA movement in a values-preserving manner. Examples of target demographics are people interested in charity and volunteering, technology, or veg*anism. Examples of books that could be offered are The Life You Can Save, Doing Good Better, The Precipice, Human Compatible, and The End of Animal Farming. Perhaps a list of books could be offered, with people being allowed to choose any one of them.
DNA banks and backup of Svalbard Global Seed Vault
Biorisk and Recovery from Catastrophe
Arguably, the most important information that the world has generated is the diversity of codes for life. Technologies are available to allow all these to be stored quickly and at low cost in DNA banks. Seed banks currently provide security for the world’s food supply. In the event of a catastrophe, it may be important to have multiple seed banks for redundancy.
Redefining humanity and assisting its transition
Artificial Intelligence, Values and Reflective Processes
As humanity inevitably evolves into coexistence with AI, the adage "if a man will not work, he shall not eat" needs to be redefined. Beyond AI's early displacement effects, already apparent in autonomous driving and trucking, humanity's productivity will continue rising due to the intrinsic nature of AI (consider 3D printing of normal and luxury goods at economies of scale), so much so that even plenitude becomes a potential problem. (Distributing that plenitude to the world's poorest is important, but it is a separate problem.) Ultimately, we should contribute to smoothing the AI transition curve: managing the initial displacement caused by AI, and then proactively managing integration.
AI alignment: Evaluate the extent to which large language models have natural abstractions
Artificial Intelligence
The natural abstraction hypothesis is the hypothesis that neural networks will learn abstractions very similar to human concepts, because these concepts are a better decomposition of reality than the alternatives. If it were true in practice, it would imply that large NNs (and large LMs in particular, due to being trained on natural language) learn faithful models of human values, and it would bound the difficulty of translating between the model's ontology and the human one in ELK, avoiding the hard case of ELK in practice. If the natural abstraction hypothesis turns out to be true at relevant scales, this would allow us to sidestep a large part of the alignment problem; if it is false, we would know to avoid a class of approaches that are doomed to fail.
We'd like to see work towards gathering evidence on whether natural abstractions holds in practice and how this scales with model size, with a focus on interpretability of model latents, and experiments in toy environments that test whether human simulators are favored in practice. Work towar... (read more)
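One concrete starting point, as a hedged sketch: check whether differently sized language models induce similar similarity structure over a set of human concepts (representational similarity analysis, a crude proxy for shared abstractions). The model names, concept list, and pooling choice below are illustrative assumptions, not a prescribed protocol.

```python
# Sketch: do two differently sized LMs share similarity structure over concepts?
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

concepts = ["dog", "cat", "justice", "theft", "kindness", "river"]  # toy list

def concept_similarities(model_name: str) -> np.ndarray:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    vecs = []
    for word in concepts:
        inputs = tok(word, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        vecs.append(hidden.mean(dim=1).squeeze().numpy())  # mean-pool tokens
    X = np.stack(vecs)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T  # pairwise cosine similarities between concepts

# Correlate the two models' similarity matrices (upper triangles only).
small, large = concept_similarities("gpt2"), concept_similarities("gpt2-medium")
iu = np.triu_indices(len(concepts), k=1)
print(np.corrcoef(small[iu], large[iu])[0, 1])  # near 1 = shared structure
```

A high correlation that holds or rises with scale on much richer concept sets would be weak evidence for the hypothesis; latent-level interpretability work would be needed for anything stronger.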
Refinement of idea #33, "A fund for movies and documentaries":
I'd like to see filmmakers (including screenwriters and directors) working on EA-inspired films collaborate with social scientists and other subject-matter experts to ensure that their films realistically depict EA issues (such as x-risks) and social dynamics. These collaborations can help filmmakers avoid pitfalls like those of Don't Look Up and The Ministry for the Future.[1]
- ^ From this review: "But while here and there an offhand reference to some reluctant group or other is made, they are, in Ministry, always feckless. The initial disaster undermines India's Hindu nationalist party, rather than strengthening it. Further disasters are met with turns to socialism. The anti-fossil fuel terrorism that is portrayed (and both criticized and seen as necessary by varying characters) does not provoke anti-environmental terrorism in response. One particularly striking example is about two-thirds of the way through the novel, when a small American town is evacuated in the name of half-Earth. While not welcomed, this evacuation is accepted in a way that is all but impossible to imagine, at least while we, looking up from... (read more)
Accelerating Accelerators
Economic Growth
Y Combinator has arguably had one of the largest impacts on GDP of any institution in history. We are interested in funding efforts to replicate that success across different geographies, sectors (e.g., healthcare, financial services), or corporate forms (e.g., not-for-profit vs. for-profit).
Salary Negotiation Service
Effective Altruism
This service would negotiate salaries on behalf of EAs or others, who would then commit a proportion of the extra money to charity. This would increase the amount of money going to EA causes, promote effective altruism, and draw people deeper into the community. Given the number of EAs working at high-paying tech companies, this would likely be profitable.
(I remembered hearing this idea from someone else a few years back, but I can't remember who it was, unfortunately, so I can't give them credit unless they name themselves)
Risks: It might be expensive to find someone with the skills to do this, and that cost might outweigh the money raised.
Ambitious Altruistic Software Engineering Efforts
Values and Reflective Processes, Effective Altruism
There is a long list of altruistic software projects waiting to be built, with various worthy goals such as improving forecasting, improving groups' ability to intelligently coordinate, or improving the quality of research and social-media conversations.
Evaluating large foundations
Effective Altruism
GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming that they care about impact or about public opinion of their operations, and thus that our analysis could actually have some effect on them).
For instance, we've seen claims that the Global Fund, which spends $4B per year, meets a 2x GiveDirectly bar but not a GiveWell Top Charity bar.
This matters because most charity - and even most good charity - is still not run by EAs or on EA lines. Also, even big, cautious foundations can risk waste or harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate at which conservative giving fails, so that we can compare it against hits-based giving. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.
Refining EA communications and messaging
Values and Reflective Processes, Research That Can Help Us Improve
If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.
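As a hedged sketch of the quantitative side (hypothetical numbers; real message testing would also need preregistration, multiple-comparison corrections, and audience segmentation), comparing two frames' retention rates might look like:

```python
# Compare two message frames in a survey experiment (illustrative data).
from statsmodels.stats.proportion import proportions_ztest

# e.g. respondents who recalled and endorsed the message a week later
successes = [312, 263]    # frame A, frame B
samples = [1_000, 1_000]  # respondents shown each frame

z, p = proportions_ztest(successes, samples)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the frames differ
```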
TL;DR: EA Retroactive Public Goods Funding
In your format:
Deciding which projects to fund is hard, partly because it's hard to guess which projects will succeed and which will fail. But startups have solved this problem neatly: anybody is allowed to vet a startup and decide to invest (bet) their money on it succeeding, and if the startup does succeed, the early investors get a big financial return.
The EA community could do the same; it is only missing the part where we give big financial returns to projects that turned out to be good.
This would make the fund's job much easier: they would only have to vet which projects helped IN RETROSPECT, which is much easier, leaving the hard prediction work to the market.
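A minimal sketch of the mechanics, with hypothetical numbers: early backers buy stakes in a project, and if a retro funder later judges it successful, the award is split pro rata.

```python
# Toy model of retroactive public goods funding (all numbers hypothetical).

def retro_payout(stakes: dict[str, float], retro_award: float) -> dict[str, float]:
    """Split a retroactive award among early backers in proportion to stake."""
    total = sum(stakes.values())
    return {backer: retro_award * amount / total for backer, amount in stakes.items()}

# Two backers fund a $50k project; the retro fund later awards $150k.
backers = {"alice": 30_000, "bob": 20_000}
print(retro_payout(backers, retro_award=150_000))
# {'alice': 90000.0, 'bob': 60000.0} -- a 3x return for betting early
```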
Context for proposing this
I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich, and doing so alone would be a somewhat broken arrangement. But I AM rich enough to fund this project and bet on it working well enough to receive a retroactive public goods grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.
In Vitalik's words:
https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c
EA Forum Writers
Pay top EA Forum contributors to write about EA topics full time
Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.
Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.
A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire
Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes
Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website, and more still on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really we're quite clueless about what the future will bring.
This week alone I've discussed with friends and acquaintances three decisions, in completely different contexts, that might make the difference between paradise and hell for all sentient life: not just in the abstract, in the way that cluelessness forces us to assign some probability to almost any outcome, but in the sense where we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions almost daily, on hardly any more information.
As a result, the robustness of an intervention has been the key criterion for prioritiza... (read more)
Subsidise catastrophic risk-related markets on prediction markets
Prediction markets and catastrophic risk
Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.
*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so might show reluctance here too.
FTX, which itself runs prediction markets, might be particularly well-suited for prediction-market interventions like this. I myself think that it could do a lot to advance people's understanding of prediction markets if, in addition to its presidential prediction market, it also offered a conditional prediction market of how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans winning vs. the Democrats winning. Conditional prediction markets for important indicators in big national elections would provide directly useful information while also educating people about prediction markets' potential.
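On the liquidity-pledging idea: one standard mechanism with a bounded subsidy is Hanson's logarithmic market scoring rule (LMSR), where the sponsor's worst-case loss is b·ln(n) for liquidity parameter b and n outcomes. A minimal sketch (the numbers are illustrative):

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    """Hanson's LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q: list[float], b: float, i: int) -> float:
    """Instantaneous price of outcome i (the gradient of C)."""
    denom = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / denom

# A sponsor choosing b caps their worst-case subsidy at b * ln(n_outcomes):
b, n = 50_000.0, 2
print(f"worst-case subsidy: ${b * math.log(n):,.0f}")  # ~$34,657

# Cost for a trader to buy 1,000 shares of outcome 0 in a fresh market:
before, after = [0.0, 0.0], [1_000.0, 0.0]
print(f"trade cost: ${lmsr_cost(after, b) - lmsr_cost(before, b):,.2f}")
```

The larger the pledged b, the deeper the market, so a fund can tune its maximum loss per market up front.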
Pandemic preparedness in LMIC countries
Biorisk
COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced they are. While there are certainly many problems with pandemic preparedness in high-income countries that need to be addressed, low- and middle-income countries (LMICs) face even more issues in detecting, identifying, containing, mitigating, and/or preventing currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMICs, opening up the risk of further, more virulent mutations.
We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local context. This project would be especially focused on novel pathogens that are more severe than currently known pathogens, and would help provide the resources and knowledge needed to upgrade plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise working in LMICs. An emergency fund and expert advice can also be provisioned to be ready to go when pathogens are... (read more)
Language models for detecting bad scholarship
Epistemic institutions
Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 such failure modes we need to protect ourselves against.
This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo).
Take some claim P which is below the threshold of obviousness that warrants a citation.
It seems relatively easy, given current tech, to answer (1): "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - and, under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.
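Question (1) maps naturally onto textual entailment. As a hedged sketch using an off-the-shelf NLI model (the threshold, sentence splitting, and example text are assumptions):

```python
# Does any sentence of the cited article entail the claim P?
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # labels: 0 contradiction, 1 neutral, 2 entailment
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment(premise: str, claim: str) -> float:
    """Probability that the premise entails the claim, under the NLI model."""
    inputs = tok(premise, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(-1).squeeze()
    return probs[2].item()

def supports(cited_sentences: list[str], claim: str, threshold: float = 0.9) -> bool:
    return max(entailment(s, claim) for s in cited_sentences) >= threshold

claim = "Sleep deprivation impairs working memory."
article = ["Participants slept 4 hours.", "Sleep loss reduced working-memory accuracy."]
print(supports(article, claim))
```

This only tackles the weak version of (1) at sentence granularity; hedges, statistics, and cross-sentence arguments would need more machinery.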
It is very hard to answer (2) "Is the cited ar... (read more)
Biorisk and information hazard workshops for iGEM competitors
Biorisk and Recovery from Catastrophe, Empowering Exceptional People
iGEM competitions are interdisciplinary synthetic-biology competitions that bring together the best and brightest university students with a considerable interest in the field. They already have knowledge and skills in bioengineering, many of them will likely choose it as a career path, and they will be very good at it. Educating them on biorisks, and especially on information hazards, would therefore be a great contribution to safeguarding. They could also be introduced to EA ideas and rationalist approaches in general, bringing talented young people on board.
Getting former hiring managers from quant firms to help with alignment hiring
Artificial Intelligence, Empowering Exceptional People
Despite having lots of funding, alignment has not been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the buy-in required before one can contribute is often cited as a barrier for alignment, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end in itself. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.
On malevolence: How exactly does power corrupt?
Artificial Intelligence, Values and Reflective Processes
How does it happen, if it happens? Some plausible stories:
Screen and record all DNA synthesis
Biorisk and Recovery from Catastrophe
Screening all DNA synthesis orders for potentially serious hazards would reduce the risk that a dangerous biological agent is engineered and released. Robustly recording what DNA is synthesized (necessarily in an encrypted fashion) would allow labs to prove that they had not engineered an agent causing an outbreak. We are interested in funding work to solve technical, political and incentive problems related to securing DNA synthesis.
Meta note: there are already some cool EA-aligned projects related to this, such as SecureDNA from the MIT Media Lab and Common Mechanism to Prevent Illicit Gene Synthesis from NTI/IBBIS. Also, this one is not an original idea of mine to an even greater extent than the others I've posted.
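To make the technical problem concrete, here is a toy sketch of the two pieces: screening via exact k-mer matching against a hazard list, and a salted hash commitment for the recording side. Real systems such as SecureDNA use curated databases and far more robust matching; everything below is illustrative.

```python
# Toy DNA synthesis screening + recording (illustrative only).
import hashlib

K = 40  # window length; real screening uses fuzzy matching, not exact windows

def kmers(seq: str, k: int = K) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Placeholder hazard index; a real one comes from a curated hazard database.
hazard_index = kmers("ATGC" * 30)

def screen(order: str) -> bool:
    """Flag an order sharing any length-K subsequence with the hazard set."""
    return not kmers(order).isdisjoint(hazard_index)

def record(order: str, salt: bytes) -> str:
    """Salted hash commitment: lets a lab later prove what it synthesized
    without revealing the sequence up front."""
    return hashlib.sha256(salt + order.encode()).hexdigest()

print(screen("TTTT" + "ATGC" * 15))  # True: contains a hazard k-mer
print(record("TTTTATGCATGC", salt=b"lab-specific-secret"))
```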
Group psychology in space
Space governance
When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they're likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and on research into social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they'll inevitably form independent group identities, but depending on relationships with social groups back home, these identities could support links with Earth or create antagonistic relationships with it. Attitudes on Earth might likewise range from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their settler colonies off-world could help us develop equitable governance structures that promote peace and cooperation between groups.
Lobbying architects of the future
Values and Reflective Processes, Effective Altruism
Advocacy often focuses on changing politics, but the most important decisions about the future of civilization may be made in domains that receive relatively less attention. Examples include the reward functions of generally intelligent algorithms that eventually get scaled up, the design of the first space colonies, and the structure of virtual reality. We would like to see one or more organizations focused on getting the right values considered by influential decision-makers at institutions like NASA and Google. We would be excited about targeted outreach to promote consideration of aligned artificial intelligence, existential risks, the interests of future generations, and nonhuman (both animal and digital) minds. The nature of this work could take various forms, but some potential strategies are prestigious conferences in important industries, retreats including a small number of highly-influential professionals, or shareholder activism.
Bounty Budgets
Like Regranting, but for Bounties
In the same way that regranting decentralizes grantmaking, we could do the same thing for bounties. For example, give the top 20 AI safety researchers up to $100,000 each to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.
In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.
Bounties are great because you only pay out on success. If, hypothetically, each researcher created 5 bounties at $10,000 each, that'd be 100 bounties - lots of experiments.
RFPs are great because they put less risk on the applicants while being a scalable, low-management way to turn money into impact.
Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo
EA ops: "Immigration Tech"
I have an idea for a cloud-based, AI-powered SaaS platform to help governments handle immigration. Think KYC meets immigration.
Today the immigration process is disjointed and fragmented across countries, and in most cases cumbersome and overly bureaucratic. That creates difficulties for immigrants, particularly in clear human-rights cases, as well as for countries, which may be losing out on highly skilled migrants.
The idea is a platform that connects potential immigrants with potential host countries. Instead of applying individually to a number of countries, an immigrant would upload their relevant documentation to the platform, which would then share it with their countries of choice. Another model could be for interested countries to reach out directly to the potential immigrant of their own accord.
Part of the work of the platform would be to perform the relevant KYC checks to authenticate the request as legitimate, thereby saving time and resources for national immigration departments, particularly when a request is lodged with multiple countries.
Obviously the idea is still in its early stages and there are a number of detail... (read more)
More public EA charity evaluators
Effective Altruism
There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge, and Animal Charity Evaluators. We would like to see more professional funding-opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas that are currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren't yet covered, such as AI and EA meta.