The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.
We have a longlist of project ideas that we’d be excited to help launch.
We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous).
All submissions must be received in the next week, i.e. by Monday, March 7, 2022.
We are excited about this prize for two main reasons:
- We would love to add great ideas to our list of projects.
- We are excited about experimenting with prizes to jumpstart creative ideas.
To participate, you can either
- Add your proposal as a comment to this post (one proposal per comment, please), or
- Fill in this form
Please write your project idea in the same format as the project ideas on our website. Here’s an example:
Early detection center
Biorisk and Recovery from Catastrophes
By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.
You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.
Some rules and fine print:
- You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
- At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
- At our discretion, we will award larger prizes for submissions that we really like.
- Prizes will be awarded at the sole discretion of the Future Fund.
We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.
We’re excited to see what you come up with!
(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)
Retrospective grant evaluations
Research That Can Help Us Improve
This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.
I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.
Starting EA community offices
Effective altruism
(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)
Here is a more ambitious version:
EA Coworking Spaces at Scale
Effective Altruism
Here is an even more ambitious one:
Found an EA charter city
Effective Altruism
Investment strategies for longtermist funders
Research That Can Help Us Improve, Epistemic Institutions, Economic growth
Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out.
We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.
Some of the ways the strategies of altruistic funders may differ include:
- Mission-correlated investing
…

I have had a similar idea, which I didn't submit, about creating investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up from, or modelled on, the existing EA long-term future fund (which I recall reading about but can't find now, sorry).
Edit: found it and some related ideas; see this and the top-level post.
Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles
Effective Altruism
When it comes to enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do related research (e.g. on health issues) themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.
Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.
Reducing gain-of-function research on potentially pandemic pathogens
Biorisk
Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.
Additional notes:
- There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals(*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishi…

Putting Books in Libraries
Effective Altruism
The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy for each library any number of books from the menu. This would help ensure that folks interested in EA-related topics who browse a library discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.
The case for the project is that if you assume the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than the financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries over a short period of weeks to months. It also can serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
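The back-of-the-envelope arithmetic above can be sketched as a quick calculation. The book cost, menu size, library count, value per contributor, and reads per year are the figures from the proposal; the reads-to-contributor conversion rate is an invented illustrative parameter, not something the proposal claims:

```python
# Toy model of the books-in-libraries proposal. Proposal figures:
# ~30 books x ~10,000 libraries at ~$20/book, $200k value per new
# EA contributor, each book read about once per year. The conversion
# rate (reads per new contributor) is a hypothetical placeholder.

books_per_library = 30
num_libraries = 10_000
cost_per_book = 20
value_per_contributor = 200_000
reads_per_book_per_year = 1
reads_per_new_contributor = 10_000  # invented for illustration

total_cost = books_per_library * num_libraries * cost_per_book
annual_reads = books_per_library * num_libraries * reads_per_book_per_year
annual_value = (annual_reads / reads_per_new_contributor) * value_per_contributor

print(f"one-off cost: ${total_cost:,}")       # one-off cost: $6,000,000
print(f"annual value: ${annual_value:,.0f}")  # annual value: $6,000,000
```

Under this made-up conversion rate the project would roughly pay for itself each year; the real crux is that conversion rate, which is exactly what circulation data from libraries could help pin down.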
I like this idea, but I wonder - how many people / students actually use physical libraries still? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).
I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.
In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.
Never Again: A Blue-Ribbon Panel on COVID Failures
Biorisk, Epistemic Institutions
Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?
We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name.
https://en.wikipedia.org/wiki/Never_again
Are you thinking of EAs running this themselves? We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.
I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel". In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project. But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.
Cognitive enhancement research and development (nootropics, devices, ...)
Values and Reflective Processes, Economic Growth
Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability (such as long- or short-term memory, abstract reasoning, creativity) and any stage of the research and development pipeline, from wet-lab research or engineering through testing in humans to product development.
Additional notes on cognitive enhancement research:
- Importance:
- Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
- Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising, there is some precedent (e.g. massive benefit…

Create and distribute civilizational restart manuals
A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme disasters.
The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus the 10 most important books, sent to 100 safe and relatively uncorrelated locations -- somewhere in New Zealand, the Antarctica research base, a couple of nuclear bunkers, nuclear submarines, etc.
We are perhaps even more concerned that great moral values, like concern for all sentient beings, survive and re-emerge than we are about preserving civilization itself, so we would love for people to do further research and work on how to preserve cosmopolitan values as well.
My comment from another thread applies here too:
SEP for every subject
Epistemic institutions
Create free online encyclopedias for every academic subject (or those most relevant to longtermism), written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decisionmakers and the public with better access to academic views on a variety of topics.
Preventing factory farming from spreading beyond the earth
Space governance, moral circle expansion (yes, I am also proposing a new area of interest)
Early space advocates such as Gerard O’Neill and Thomas Heppenheimer both included animal husbandry in their designs for space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in or announced projects to employ fish or insect farming in space.
This, if successful, might multiply the suffering of farmed animals to many times the current number of farmed animals on earth, spread across the long-term future. Research is needed in areas like:
- Continuous tracking of the scientific research on transporting and raising animals in space colonies or other planets.
- Tracking, or even conducting research on the feasibility of cultivating meat in space.
- Tracking the development and implementation of AI in factory farming, which might enable unmanned factory farms and therefore make space factory farming more feasible. For instance, the aquaculture industry is hoping that AI can help them overcome major difficultie…

Purchase a top journal
Metascience
Journals give academics bad incentives - they require new knowledge to be written in hard-to-understand language, published without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivised high-quality work on the most important topics would begin to turn the scientific system around.
We could, of course, simply get the Future Fund to pay for this. There is, however, an alternative that might be worth thinking about.
This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.
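As a sketch of the dominant assurance contract mechanic described above (the names, amounts, and bonus rate are hypothetical, and `settle` is an invented helper, not a real crowdfunding API):

```python
# Dominant assurance contract, after Tabarrok: if the campaign misses
# its target, backers are refunded WITH a bonus paid by a guarantor
# (here, the Future Fund), so pledging weakly dominates abstaining.

def settle(pledges, target, bonus_rate):
    """Return (funded, refunds). All parameters are illustrative."""
    total = sum(pledges.values())
    if total >= target:
        # Target met: pledges are collected and spent on the journal.
        return True, {name: 0.0 for name in pledges}
    # Target missed: everyone gets their pledge back plus the bonus.
    return False, {name: amt * (1 + bonus_rate) for name, amt in pledges.items()}

funded, refunds = settle({"alice": 100, "bob": 50}, target=1_000, bonus_rate=0.05)
print(funded, refunds)  # campaign fails; each backer is refunded +5%
```

The guarantor only pays out in the failure case, which is exactly what makes chipping in safe for everyone who wants the journal bought.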
Here is one way we could do this:
A Longtermist Nobel Prize
All Areas
The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.
(A variation on a suggestion by DavidMoss)
Megastar salaries for AI alignment work
Artificial Intelligence
Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.
Longtermist Policy Lobbying Group
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bi-partisan, there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. While such a project could scale to be very large once successful, anyone working on it should start small and tread carefully, aiming to avoid the unilateralist's curse and to keep longtermism from becoming an overly partisan issue. Longtermist lobbying might also be best done as lobbying for distinct causes related to longtermism, such as climate change mitigation or pandemic preparedness.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Landscape Analysis: Longtermist Policy
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bi-partisan but there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Experiments to scale mentorship and upskill people
Empowering Exceptional People, Effective Altruism
For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and there are few well-defined paths for young or early-career people to follow, so it can be hard to enter the field. Experts in the field are often ideal mentors - they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open access to new opportunities - but there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The kinds of possible solutions are very broad - from developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, to increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.
Proportional prizes for prescient philanthropists
Effective Altruism, Economic Growth, Empowering Exceptional People
A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.
Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.
If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.
High quality, EA Audio Library (HEAAL)
all/meta, though I think the main value add is in AI
(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)
Audio has several advantages over text when it comes to consuming long-form content, with one significant example being that people can consume it while doing some other task (commuting, chores, exercising) meaning the time cost of consumption is almost 0. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:
What does high quality mean here, and what content might get covered?
High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical n…
High-quality human performance is much more engaging than autogenerated audio, fwiw.
Our World in Base Rates
Epistemic Institutions
Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a great public good.
So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example.
e.g.
"85% of big data projects fail"
"10% of people refuse to be vaccinated because of fearing needles (pre-COVID, so you can compare to COVID hesitancy)"
"11% of ballot initiatives pass"
"7% of Emergent Ventures applications are granted"
"50% of applicants get 80k advice"
"x% of applicants get to the 3rd round of OpenPhil hiring", "which takes y months"
"x% of graduates from country [y] start a business"
MVP:
Later, Q…
I think this is neat.
Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."
I of course see this as basically a bunch of estimation functions, but you get the idea.
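That "bunch of estimation functions" idea might look something like this in miniature. The base rate echoes the "85% of big data projects fail" example; the adjustments and their magnitudes are invented purely for illustration, not fitted to any data:

```python
# From a coarse base rate to a parameterised estimate. The 85% figure
# is the "85% of big data projects fail" example; the adjustments
# below are made-up placeholders.

BASE_FAILURE_RATE = 0.85

def failure_estimate(team_experience_years: float, budget_musd: float) -> float:
    """Toy model: nudge the base rate using project parameters."""
    rate = BASE_FAILURE_RATE
    rate -= 0.02 * min(team_experience_years, 10)  # experience helps, capped
    rate += 0.05 if budget_musd > 50 else 0.0      # very large budgets add risk
    return min(max(rate, 0.0), 1.0)

print(failure_estimate(0, 10))   # no adjustments: the bare base rate
print(failure_estimate(10, 60))  # experienced team, big budget
```

A real version would presumably be fitted to reference-class data rather than hand-tuned like this, but the interface (parameters in, calibrated probability out) is the point.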
Teaching buy-out fund
Reallocate EA Researchers from Teaching to Research
Problem: Professors spend a lot of their time teaching instead of researching. Many don’t know that many universities offer “teaching buy-outs”, where if you pay a certain amount of money, you don’t have to teach. Many also don’t know that a lot of EA funding would be interested in paying that.
Solution: Make a fund that's explicitly for this, to make it so more EAs know. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.
Adversarial collaborations on important topics
Epistemic Institutions
There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, among which there are reasonable people with very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve the disagreement or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified from them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.
Existing and past work relevant to this space include the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.
Foundational research on the value of the long-term future
Research That Can Help Us Improve
If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?
To elaborate on what I have in mind with this proposal, it seems important to conduct research beyond reducing existential risk over the next century – we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of the posts:
- "Disappointing Futures" Might Be As Important As Existential Risk - Michael Dickens
- Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese
- The expected value of extinction risk reduction is positive - Jan Brauner and Friederike Grosse-Holz and A longtermist critique of “The expected value of extinction risk reduction is positive”
- Should We Prioritize Long-Term Existential…

Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism
Epistemic Institutions, Values and Reflective Processes
Right now longtermism is being developed within a relatively narrow set of stakeholders and participants relative to the broad set of people (and nonhumans) that would be affected by the decisions we make. We'd like to see focus groups that attempt to engage a more diverse group of people (diversity across many axes including but not limited to race, gender, age, geography, and socioeconomic status), attempt to explain longtermism to them, and explore what visions they have for the future of humanity (and nonhumans). Hopefully through many iterations we can find a way to bridge what is likely a rather large initial inferential distance and explore how a broader and more diverse group of people would think about longtermism once ideally informed. This can be related to and informed by engaging in deliberative democracy. This could also help initiate what longtermists call "the long reflection".
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Incubator for Independent Researchers
Training People to Work Independently on AI Safety
Problem: AI safety is bottlenecked by management and jobs. There are <10 orgs you can do AI safety full time at, and they are limited by the number of people they can manage and their research interests.
Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.
This could also be structured as a research organization instead of an incubator.
EA Marketing Agency
Improve Marketing in EA Domains at Scale
Problem: EAs aren’t good at marketing, and marketing is important.
Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.
Expected value calculations in practice
Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.
Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.
We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.
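As a minimal sketch of the kind of comparison such a tool would enable (both "projects" and all probabilities and impact figures here are invented placeholders):

```python
# Toy expected-value comparison between a speculative long shot and a
# safer project. Scenario probabilities and impacts are made up.

def expected_value(scenarios, cost):
    """scenarios: list of (probability, impact) pairs summing to 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * impact for p, impact in scenarios) - cost

long_shot = expected_value([(0.01, 10_000), (0.99, 0)], cost=50)  # rare, huge upside
safe_bet = expected_value([(0.9, 100), (0.1, 0)], cost=50)        # likely, modest upside

print(long_shot, safe_bet)  # 50.0 40.0: the long shot wins on EV alone
```

Of course, the hard work GiveWell-style analysis does is in justifying those probabilities and impact figures in the first place; the arithmetic itself is the easy part.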
AGI Early Warning System
Anonymous Fire Alarm for Spotting Red Flags in AI Safety
Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.
Solution: create an anonymous form - a “fire alarm” (like a whistleblowing Andon Cord of sorts) - where these employees can report what they’re seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.
Alignment Forum Writers
Pay Top Alignment Forum Contributors to Work Full Time on AI Safety
Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.
Solution: Offer them enough money to quit their job and work on AI safety full time.
(Per Nick's note, reposting)
Political fellowships
Values and Reflective Processes, Empowering Exceptional People
We’d like to fund ways to pull people who wouldn’t otherwise run for political office into running. It's like a MacArthur. You get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get some training, like DCCC and NRCC, and when you run, you get two million spent by the super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.
The Billionaire Nice List
Philanthropy
A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, people like checking lists. Let's attempt to create a list which can be sorted by different moral weights and incentivises billionaires to do more good.
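One way the "sortable by different moral weights" idea could work, sketched with entirely made-up donors and impact numbers:

```python
def rank_donors(donors, weights):
    """Rank donors by a weighted sum of per-cause impact estimates.

    donors:  {name: {cause: impact_estimate}}  (all numbers invented)
    weights: {cause: moral_weight} chosen by the reader
    """
    scores = {
        name: sum(weights.get(cause, 0.0) * value for cause, value in impacts.items())
        for name, impacts in donors.items()
    }
    # Highest weighted score first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

donors = {
    "Donor A": {"global_health": 9.0, "x_risk": 1.0},
    "Donor B": {"global_health": 2.0, "x_risk": 8.0},
}
# Different moral weightings reorder the same underlying estimates.
longtermist = rank_donors(donors, {"global_health": 0.2, "x_risk": 1.0})
neartermist = rank_donors(donors, {"global_health": 1.0, "x_risk": 0.2})
```

The underlying impact estimates stay fixed; only the reader's weights change the ordering, which is what would let one list serve audiences with different moral views.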
Pro-immigration advocacy outside the United States
Economic Growth
Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress there. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing immigration or keeping it at about the same level. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for immigration there.
What this project could look like:
Related posts:
- Which countries are most receptive…
Improving ventilation
Biorisk
Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.
We'd like to see a project that does rigorous research to establish strong ventilation strategies for a variety of contexts and explores their effectiveness. Once successful ventilation strategies are developed, assuming it would be cost-effective to do so, this project could then aim to roll out ventilation and campaign/market for ventilation interventions as a for-profit, non-profit, or hybrid.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Advocacy organization for unduly unpopular technologies
Public opinion on key technologies
Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.
Building the grantmaker pipeline
Empowering Exceptional People, Effective Altruism
The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers, who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by 5-10x over the coming decade, and this requires finding and training new grantmakers on best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA funds-style organisations with rotating fund managers, and more.
NB: This might be a refinement of fellowships, but I think it's particularly important.
Top ML researchers to AI safety researchers
Pay top ML researchers to switch to AI safety
Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.
Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.
EA Productivity Fund
Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.
Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can’t afford them or would be put off by the cost (imposter syndrome, or just because it feels selfish).
Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.
Automated Open Project Ideas Board
The Future Fund
All of these ideas should be submitted to a board where anyone can forecast their value (e.g. in dollars, or lives saved per $) as rated by a trusted research organisation, say Rethink Priorities. The forecasts could be reputation-weighted or run as prediction markets. That research organisation then checks 1% of the ideas and scores them. These scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Funders can then donate to them as they see fit.
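A minimal sketch of the proposed weighting scheme, with hypothetical forecasters and scores (the audited values, forecast numbers, and inverse-error weighting rule are all illustrative assumptions, not a specified design):

```python
from collections import defaultdict

def weighted_rankings(forecasts, audited, eps=1e-6):
    """Weight forecasters by accuracy on an audited subsample, then
    aggregate their forecasts on the remaining ideas.

    forecasts: {idea: {forecaster: predicted_value}}  (hypothetical inputs)
    audited:   {idea: true_value} for the small audited subset
    """
    # 1. Score each forecaster on the audited ideas (mean absolute error).
    errors = defaultdict(list)
    for idea, true_value in audited.items():
        for forecaster, pred in forecasts.get(idea, {}).items():
            errors[forecaster].append(abs(pred - true_value))
    weights = {f: 1.0 / (sum(errs) / len(errs) + eps) for f, errs in errors.items()}

    # 2. Weighted average forecast for every non-audited idea.
    rankings = {}
    for idea, preds in forecasts.items():
        if idea in audited:
            continue
        num = sum(weights.get(f, 0.0) * p for f, p in preds.items())
        den = sum(weights.get(f, 0.0) for f in preds)
        if den > 0:
            rankings[idea] = num / den
    return dict(sorted(rankings.items(), key=lambda kv: -kv[1]))

# Toy example: Alice is accurate on the audit, Bob is not,
# so Alice's forecasts dominate the rankings.
forecasts = {
    "audited-idea": {"alice": 10.0, "bob": 30.0},
    "new-idea-1": {"alice": 5.0, "bob": 50.0},
    "new-idea-2": {"alice": 8.0, "bob": 2.0},
}
ranked = weighted_rankings(forecasts, audited={"audited-idea": 11.0})
```

The key design property is that the 1% audit is cheap for the research organisation, while the weighting makes the whole board's rankings track the forecasters who have actually proven accurate.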
Massive US-China exchange programme
Great power conflict, AI
Fund (university) students to live with a host family in the other country: US-China, Russia-US, China-India, potentially India-Pakistan. This is important if one thinks that personal experience makes it less likely that individuals incentivise or encourage escalation, war, and certain competitive dynamics.
Nuclear/Great Power Conflict Movement Building
Effective Altruism
Given the current situation in Ukraine, movement-building related to nuclear x-risk or great power conflict would likely be much more tractable than it was until recently. We don't know how long this window will last, and public memory can be short, so we should take advantage of this opportunity. This outreach should focus on people with an interest in policy or potential student group organisers, as these people are most likely to have influence here.
(Per Nick's note, reposting)
Market shaping and advanced market commitments
Epistemic institutions; Economic Growth
Market shaping is when an idea can only be jump-started by committed demand or other forces. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, but the mechanism has been used several times before for vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping or advanced market commitments in our areas of interest.
(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)
Crowding in other funding
We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive and some others are only ever likely to attract (or want) longtermist funding. But, we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:
An Organisation that Sells its Impact for Profit
Empowering Exceptional People, Epistemic Institutions
Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) Selling Impact Certificates, (2) Prizes, and/or (3) Grants (but only if they value the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis. Funding would be raised from for-profit investors. Staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if the funding mechanisms (1-3) become sufficiently popular, but given the increased funding in longtermist circles, this now looks increasingly likely.
See also Retrospective grant evaluations, Retroactive public goods funding, Impact …
Rationalism But For Group Psychology
Epistemic Institutions
LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.
We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and help groups achieve more impact, like how the original rationalist movement did so with individuals. We imagine this would involve identifying useful ideas from group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as trying to intentionally experiment. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to help align employees with organizational goals, better ways to keep track of projects, etc.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Wild animal suffering in space
Space governance, moral circle expansion.
Terraforming other planets might cause animals to come to exist on those planets, whether through intentional or unintentional actions. These animals might live net-negative lives.
Also, we cannot rule out the possibility that there are already wild "animals" (or other sentient beings) living net-negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)
Relevant research includes:
AI alignment prize suggestion: Introduce AI Safety concepts into the ML community
Artificial Intelligence
Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, sometimes include some initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for further research, and demonstrate that researchers can do alignment research while also publishing in top venues. A great example would be Optimal Policies Tend To Seek Power, published in NeurIPS. Future Fund could advertise prizes for any paper that gets published in a top ML/NLP/Computer Vision conference (from ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.
EA Macrostrategy
Effective Altruism
Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.
Evaluating large foundations
Effective Altruism
GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming that they care about impact or public opinion of their operations, and thus that our analysis could actually have some effect on them).
For instance, we've seen claims that the Global Fund, which spends $4B per year, meets a 2x GiveDirectly bar but not a GiveWell Top Charity bar.
This matters because most charity - and even most good charity - is still not by EAs or run on EA lines. Also, even big cautious foundations can risk waste / harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate of conservative giving failing, so that we can compare it against hits-based giving. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.
Refining EA communications and messaging
Values and Reflective Processes, Research That Can Help Us Improve
If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.
TL;DR: EA Retroactive Public Goods Funding
In your format:
Deciding which projects to fund is hard, and one of the reasons for that is that it's hard to guess which projects will succeed and which will fail. But wait, startups have solved this problem perfectly: Anybody is allowed to vet a startup and decide to invest (bet) their money on this startup succeeding, and if the startup does succeed, then the early investors get a big financial return.
The EA community could do the same, only it is missing the part where we give big financial returns to projects that turned out good.
This would make the fund's job much easier: they would only have to vet which projects helped IN RETROSPECT, which is far easier, leaving the hard prediction work to the market.
Context for proposing this
I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich and that would be somewhat broken to do. But I AM rich enough to fund this project and bet on it working well enough to get a Retroactive Public Good grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.
In Vitalik's words:
https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c
EA Forum Writers
Pay top EA Forum contributors to write about EA topics full time
Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.
Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.
A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire
Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes
Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website. There are additional proposals also on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really we’re quite clueless about what the future will bring.
This week alone I’ve discussed with friends and acquaintances three decisions, in completely different contexts, that might make the difference between paradise and hell for all sentient life, not just in the abstract way that cluelessness forces us to assign some probability to almost any outcome, but in the sense that we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions almost daily, on hardly any more information.
As a result, the robustness of an intervention has been the key criterion for prioritization…
Subsidise catastrophic risk-related markets on prediction markets
Prediction markets and catastrophic risk
Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.
*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so might show reluctance here too.
FTX, which itself runs prediction markets, might be particularly well-suited for interventions like this. I myself think that they could do a lot to advance people's understanding of prediction markets if, in addition to their presidential prediction market, they also offered a conditional market on how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans vs. the Democrats winning. Conditional prediction markets on important indicators in big national elections would provide directly useful information while also educating people about prediction markets' potential.
Pandemic preparedness in LMICs
Biorisk
COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced they are. While there are certainly many problems with pandemic preparedness in high-income countries that need to be addressed, LMICs face even more issues in detecting, identifying, containing, mitigating, and/or preventing currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMICs, opening up the risk of further, more virulent mutations.
We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local context. This project would be especially focused on novel pathogens more severe than currently known ones, and would help provide the resources and knowledge needed to upgrade those plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise working in LMICs. An emergency fund and expert advice could also be provisioned to be ready to go when pathogens are…
Language models for detecting bad scholarship
Epistemic institutions
Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 features we need to protect ourselves from.
This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo).
Take some claim P which is below the threshold of obviousness that warrants a citation.
It seems relatively easy, given current tech, to answer: (1) "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - but under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.
It is very hard to answer (2) "Is the cited article…
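To show the shape of a pipeline for question (1), "does the cited article say P?", here is a deliberately crude sketch: it substitutes bag-of-words overlap for the trained entailment/summarisation model a real system would need, and the claim, article text, and scoring rule are all invented for illustration.

```python
import re

def citation_support_score(claim: str, cited_text: str) -> float:
    """Crude stand-in for 'does the cited article say P?'.

    A real system would run an NLI/entailment model over the cited
    article's sentences; bag-of-words overlap merely shows the pipeline:
    split the source into sentences, score each against the claim,
    and return the best match.
    """
    def tokens(text: str) -> set:
        return set(re.findall(r"[a-z']+", text.lower()))

    claim_toks = tokens(claim)
    sentences = re.split(r"(?<=[.!?])\s+", cited_text)
    best = 0.0
    for sentence in sentences:
        sent_toks = tokens(sentence)
        if not sent_toks or not claim_toks:
            continue
        overlap = len(claim_toks & sent_toks) / len(claim_toks)
        best = max(best, overlap)
    return best  # 1.0 = every claim token appears in some source sentence

claim = "sleep deprivation impairs memory consolidation"
article = ("We review the literature on sleep. Our results show that sleep "
           "deprivation impairs memory consolidation in mice. Funding was provided by...")
score = citation_support_score(claim, article)
```

Even this weak lexical signal illustrates the interface a checker would expose: a per-citation support score that a reader, reviewer, or writing tool could threshold on before trusting or emitting a citation.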
Getting former hiring managers from quant firms to help with alignment hiring
Artificial Intelligence, Empowering Exceptional People
Despite having lots of funding, alignment seems to not have been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the need to buy into alignment before one can contribute is often cited as a reason, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.
On malevolence: How exactly does power corrupt?
Artificial Intelligence / Values and Reflective Processes
How does it happen, if it happens? Some plausible stories:
Bounty Budgets
Like Regranting, but for Bounties
Solution: Just as regranting decentralizes grantmaking, do the same for bounties. For example, give the top 20 AI safety researchers up to $100,000 each to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.
In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.
Bounties are great because you only pay out if it's successful. If hypothetically each researcher created 5 bounties at $10,000 each that’d be 100 bounties - lots of experiments.
RFPs are great because it puts less risk on the applicants but also is a scalable, low-management way to turn money into impact.
Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo
More public EA charity evaluators
Effective Altruism
There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge and Animal Charity Evaluators. We would like to see more professional funding opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas that are currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren’t yet covered such as AI and EA meta.
Longtermist risk screening and certification of institutions
Artificial Intelligence, Biorisk and Recovery from Catastrophe
Companies, nonprofits and government institutions participate and invest in activities that might significantly increase global catastrophic risk like gain-of-function research or research that might increase the likelihood of unaligned AGI. We’d like to see an organisation that evaluates and proposes policies and practices that should be followed in order to reduce these risks. Institutions that commit to following these practices and submit themselves to independent audits could be certified. This could help investors and funders to screen institutions for potential risks. It could also be used in future corporate campaigns to move companies and investors into adopting responsible practices.
Resilient ways to archive valuable technical / cultural / ecological information
Biorisk and recovery from catastrophe
In ancient Sumeria, clay tablets recording ordinary market transactions were considered disposable. But today's much larger and wealthier civilization considers them priceless for the historical insight they offer. By the same logic, if human civilization millennia from now becomes a flourishing utopia, they'll probably wish that modern-day civilization had done a better job at resiliently preserving valuable information. For example, over the past 120 years, around 1 vertebrate species has gone extinct each year, meaning we permanently lose the unique genetic info that arose in that species through millions of years of evolution.
There are many existing projects in this space -- like the internet archive, museums storing cultural artifacts, and efforts to protect endangered species. But almost none of these projects are designed robustly enough to last many centuries with the long-term future in mind. Museums can burn down, modern digital storage technologies like CDs and flash memory aren't designed to last for centuries, and many…
AI Safety “school” / More AI safety Courses
Train People in AI Safety at Scale
Problem: Part of the talent bottleneck is caused by there not being enough people who have the relevant skills and knowledge to do AI safety work. Right now, there’s no clear way to gain those skills. There’s the AGI Fundamentals curriculum, which has been a great success, but aside from that, there’s just a handful of reading lists. This ambiguity and lack of structure lead to way fewer people getting into the field than otherwise would.
Solution: Create an AI safety “school” or a bunch more AI safety courses. Make it so that if you finish the AGI Fundamentals course there are a lot more courses where you can dive deeper into various topics (e.g. an interpretability course, a value learning course, an agent foundations course, etc.). Make it so there’s a clear curriculum to build up your technical skills (probably just finding the best existing courses, putting them in the right order, and adding some accountability systems). This could be funded course by course, or funded as a school, which would probably lead to more and better quality content in the long run.
Offer paid sabbatical to people considering changing careers
Empowering Exceptional People
People are sometimes locked into their non-EA careers because, while working, they do not have time to:
Create an organization that will offer paid sabbaticals to people considering changing careers to more EA-aligned jobs to help this transition. During the sabbatical, they could be members of a community of people in a similar situation, with coaching available.
Agree. I think that having an Advance Market Commitment system for this makes sense. E.g., FTX says 'We will fund mid-career academics/professionals for up to x months to do y.' My experience is that most of the high-value people I know who are good professionals are sufficiently time-poor and dissuaded by uncertainty that they won't spend 2-5 hours applying for something they don't know they will get. The barriers and costs are probably greater than most EA funders realise.
An alternative/related idea is to have a simple EOI system where people can submit a fleshed-out CV and a paragraph and then get an AMC on an application - e.g., 'We think there is a more than 60% chance that we would fund this and would therefore welcome a full application.'
A public EA impact investing evaluator
Effective Altruism, Empowering Exceptional People
Charity evaluators that publicly share their research - such as GiveWell, Founders Pledge and Animal Charity Evaluators - have arguably not only helped move a lot of money to effective funding opportunities but also introduced many people to the principles of effective altruism, which they have applied in their lives in various ways. Apart from some relatively small projects (1) (2) (3) there is currently no public EA research presence in the growing impact investing sector, which is both large in the amount of money being invested and in its potential to draw more exceptional people’s attention to the effective altruism movement. We’d love to see an organization that takes GiveWell-quality funding opportunity research to the impact investing space and publicly shares its findings.
Predicting Our Future Grants
Epistemic Institutions, Research That Can Help Us Improve
If we had access to a crystal ball that allowed us to know exactly what our grants five years from now otherwise would have been, we could make substantially better decisions now. Just making the grants we'd otherwise have made five years in the future would save a lot of grantmaking time and money, as well as cause many amazing projects to happen more quickly.
We don't have a crystal ball that lets us see future grants. But perhaps high-quality forecasts can be the next best thing. Thus, we're extremely excited about people experimenting with Prediction-Evaluation setups to predict the Future Fund's future grants with high accuracy, helping us to potentially allocate better grants more quickly.
Participatory longtermism
Values and reflective processes, Effective Altruism
Most longtermist and EA ideas come from a small group of people with similar backgrounds, but could affect the global population now and in the future. This creates the risk of longtermist decisionmakers not being aligned with that wider population. Participatory methods aim to involve people in decisionmaking about issues that affect them, and they have become common in fields such as international development, global health, and humanitarian aid. Although a lot could be learned from existing participatory methods, they would need to be adapted to issues of concern to EAs and longtermists. The fund could support the development of new participatory methods that fit with EA and longtermist concerns, and could fund the running of participatory processes on key issues.
Additional notes:
Research on the long-run determinants of civilizational progress
Economic growth
What factors were the root cause of the industrial revolution? Why did industrialization happen in the time and place and ways that it did? How have the key factors supporting economic growth changed over the last two centuries? Why do some developing countries manage to "catch up" to the first world, while others lag behind or get stuck in a "middle-income trap"? Is the pace of entrepreneurship or scientific innovation slowing down -- and if so, what can we do about it? Is increasing amounts of "vetocracy" an inevitable disease that afflicts all stable and prosperous societies (as Holden Karnofsky argues here), or can we hope to change our culture or institutions to restore dynamism? At FTX, we'd be interested to fund research into these "progress studies" questions. We're also interested in funding advocacy groups promoting potential policy reforms derived from the ideas of the progress studies movement.
Pay prestigious universities to host free EA-related courses to very large numbers of government officials from around the world
Empowering Exceptional People
The direct benefit of the courses would be to give government officials better tools for thinking and talking with each other.
The indirect benefit could be to allow large numbers of predisposed officials to be seen by <some organisation>, which could use the opportunity to identify those with particular potential and offer them extra support or opportunities so they can make an even bigger impact.
The course needs to be free to overcome the blocker of otherwise having to write a business case for attendance, which may then require some sort of tortuous approval process.
It needs to be hosted at a prestigious university so that attendees can justify the course to bosses or colleagues by piggybacking off the university's brand.
Infrastructure to support independent researchers
Epistemic Institutions, Empowering Exceptional People
The EA and Longtermist communities appear to contain a relatively large proportion of independent researchers compared to traditional academia. While working independently can provide the freedom to address impactful topics by liberating researchers from the perverse incentives, bureaucracy, and other constraints imposed on academics, the lack of institutional support can impose other difficulties that range from routine (e.g. difficulties accessing pay-walled publications) to restrictive (e.g. lack of mentorship, limited opportunities for professional development). Virtual independent scholarship institutes have recently emerged to provide institutional support (e.g. affiliation for submitting journal articles, grant management) for academic researchers working independently. We expect that facilitating additional and more productive independent EA and Longtermist research will increase the demographic diversity and expand the geographical inclusivity of these communities of researchers. Initially, we would like to determine the main needs and limitations independent... (read more)
EA Health Institute/Chief Wellness Officer
Empowering Exceptional People, Effective Altruism, Community Building
Optimizing physical and mental health can improve cognitive performance and decrease burnout. We need EAs/longtermists to have the health resilience to weather the storm - physical fitness, sleep, nutrition, mental health. An institution could be created to assist EA aligned organizations and individuals. Using best practices from high performance workplace health, both personal and organizational, and innovative new ideas, a wellness team could help EAs have sustainable and productive careers. This could be done through consulting, coaching, preparation of educational materials or retreats. From a community growth perspective, EA becomes more attractive to some when one doesn’t have to sacrifice health for deeply meaningful work.
(Disclosure: I'm a physician and physician-wellness SME; helping with this could be a good personal fit.)
Unified, quantified world model
Epistemic Institutions, Effective Altruism, Values and Reflective Processes, Research That Can Help Us Improve
Effective altruism started out, to some extent, with a strong focus on quantitative prioritization along the lines of GiveWell’s quantitative models, the Disease Control Priorities studies, etc. But these largely ignore complex, often nonlinear effects of interventions on culture, international coordination, and the long-term future. Attempts to transfer the same rigor to quantitative models of the long-term future (such as Tarsney’s set of models in The Epistemic Challenge to Longtermism) are still in their infancy. Otherwise, effective altruist prioritization today is a grab bag of hundreds of considerations that interact in complex ways that (probably) no one has an overview of. Decision-makers may forget to take half of them into account if they haven’t recently thought about them. That makes it hard to prioritize, and misprioritization becomes more and more costly with every year.
A dedicated think tank could create and continually expand a unified world model that (1) is a repository of all considerations that affect altruistic decisi... (read more)
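To make the idea of a "repository of considerations" more concrete, here is a minimal sketch of how such a model might combine a baseline cost-effectiveness estimate with recorded considerations. Everything here is a hypothetical illustration: the interventions, numbers, and the confidence-weighting scheme are made up for the example, not real estimates or a proposed methodology.

```python
# Illustrative sketch only: a toy "consideration repository" in the spirit of
# GiveWell-style quantitative models. All interventions and numbers below are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Consideration:
    name: str
    effect: float        # multiplicative adjustment to the baseline value
    confidence: float    # 0..1, how sure we are the effect applies

def adjusted_value(baseline: float, considerations: list[Consideration]) -> float:
    """Combine a baseline cost-effectiveness estimate with a list of
    considerations, discounting each effect by our confidence in it."""
    value = baseline
    for c in considerations:
        # Interpolate between "no effect" (factor 1.0) and the full effect,
        # weighted by confidence in the consideration.
        value *= 1.0 + c.confidence * (c.effect - 1.0)
    return value

# Hypothetical intervention with two recorded considerations.
baseline = 100.0  # expected value per $1,000 in made-up units
considerations = [
    Consideration("crowding out other funders", effect=0.8, confidence=0.5),
    Consideration("long-run cultural spillover", effect=1.5, confidence=0.3),
]
print(adjusted_value(baseline, considerations))
```

The point of the sketch is that every consideration is written down in one place with an explicit (if rough) weight, so decision-makers can't silently forget half of them.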
Civic sector software
Economic Growth, Values and Reflective Processes
Software and software vendors are among the biggest barriers to instituting new public policies or processes. The last twenty years have seen staggering advances in technology, user interfaces, and user-centric design, but governments have been left behind, saddled with outdated, bespoke, and inefficient software solutions. Worse, change of any kind can be impractical with existing technology systems or when choosing from existing vendors. This fact prevents public servants from implementing new evidence-based practices, becoming more data-driven, or experimenting with new service models.
Recent improvements in civic technology are often at the fringes of government activity, while investments in best practices or “what works” are often impossible for any government to implement because of technology. So while over the last five years, there has been an explosion of investments and activity around “civic innovation,” the results are often mediocre. On the one hand, governments end up with little more than tech toys or apps that have no relationship to the outcomes that matter (e.g. poverty alleviation, service deli... (read more)
(For context, I was the Chief Data Officer of the California State Government and CTO of Newark, NJ when Cory Booker was Mayor).
I actually think the way to do this is to partner with one city and build everything they need to run the city. The problem is that people can't use piecemeal systems very well. It would just take a huge initial set of capital -- like exactly the type of capital that could be provided here.
Teaching secondary school students about the most pressing issues for humanity's long-term future
Values and Reflective Processes, Effective Altruism
High-quality human data
Artificial Intelligence
Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects.
We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop that’s optimized for researchers’ workflow. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.
Some alignment research teams currently manage their own contractors because existing services (such as surgehq.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.
Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – i.e., rapidly producing moderately la... (read more)
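One routine quality check such a startup would likely run is inter-annotator agreement on its labels. As a minimal sketch (the task and labels below are hypothetical examples), here is chance-corrected agreement (Cohen's kappa) computed from scratch:

```python
# Illustrative sketch: measuring label quality via inter-annotator agreement
# (Cohen's kappa), one standard check a human-data provider might run.
# The task and labels below are hypothetical examples.
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability both annotators pick the same label
    # if they labelled independently at their observed rates.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Two annotators rating whether a model-written critique was good.
ann_1 = ["good", "good", "bad", "good", "bad", "bad"]
ann_2 = ["good", "bad", "bad", "good", "bad", "good"]
print(round(cohens_kappa(ann_1, ann_2), 3))
```

Low kappa on a batch would flag unclear instructions or unreliable annotators before the data reaches a research team.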
Advocacy for digital minds
Artificial Intelligence, Values and Reflective Processes, Effective Altruism
Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.
X-risk Art Competitions
Fund competitions to make x-risk art that creates emotional engagement
Problem: Some EAs find longtermism intellectually compelling but not emotionally compelling, so they don’t work on it, yet feel guilty.
Solution: Hold competitions where artists make art explicitly intended to make x-risk emotionally compelling. Use crowd voting to determine winners.
Translate EA content at scale
Reach More Potential EAs in Non-English Languages
Problem: Lots of potential EAs don’t speak English, but most EA content hasn’t been translated.
Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.
Provide personal assistants for EAs
Empowering Exceptional People
Many senior EAs spend far too much time on busywork because it is hard to get a good personal assistant. This is currently the case because:
All these factors would be removed if an agency managed personal assistants.