
The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.

We have a longlist of project ideas that we’d be excited to help launch. 

We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous). 

All submissions must be received in the next week, i.e. by Monday, March 7, 2022. 

We are excited about this prize for two main reasons:

  • We would love to add great ideas to our list of projects.
  • We are excited about experimenting with prizes to jumpstart creative ideas.

To participate, you can either

  • Add your proposal as a comment to this post (one proposal per comment, please), or
  • Fill in this form

Please write your project idea in the same format as the project ideas on our website. Here’s an example:

Early detection center

Biorisk and Recovery from Catastrophes

By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.

You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.

Some rules and fine print:

  • You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
  • At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
  • At our discretion, we will award larger prizes for submissions that we really like.
  • Prizes will be awarded at the sole discretion of the Future Fund.

We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.

We’re excited to see what you come up with!

(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)

Comments (731)

Retrospective grant evaluations

Research That Can Help Us Improve

EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker's track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.

4
Avi Lewis
I'd like to expand on this: a think-tank/paper that formulates a way of evaluating all grants by a set of objective, quantifiable criteria, in order to better inform future allocation decisions so that each dollar spent ends up making the greatest impact possible. In this respect, retrospective grant evaluation is but one variable to measure grant effectiveness. I have a few more ideas that can be combined to create some kind of weighted scoring mechanism for grant evaluation:
  • Social return on investment (SROI): arriving at a set of non-monetary variables to quantify social impact
  • Cost-effectiveness analysis: GiveWell is a leader in this. We could consider applying some of their key learnings from the not-for-profit space to EA projects
  • Horizon scanning: governmental bodies have departments that perform this kind of work. A proposal could be assessed by its alignment with emerging technology forecasts
  • Backcasting: seek out ventures that are working towards a desirable future goal
  • Pareto optimality: penalize ideas that could have potential negative impact on factors/people outside of the intended target audience
  • Competence and track record: prioritize grant allocators/judges based on previous successful grants. Prioritize grants to founders or organizations with a proven track record of competence
Obviously this list could go on, and this is just a small number of possible variables. The idea is simply to build a model that can score the utility of a proposed grant (see the sketch below).
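A minimal sketch of what such a weighted scoring model might look like; all criteria names, weights, and scores below are illustrative placeholders, not a calibrated methodology:

```python
# Illustrative sketch of a weighted grant-scoring model.
# Criteria and weights are hypothetical placeholders.

WEIGHTS = {
    "sroi": 0.25,               # social return on investment
    "cost_effectiveness": 0.25, # GiveWell-style cost-effectiveness
    "horizon_alignment": 0.15,  # fit with emerging-technology forecasts
    "backcasting_fit": 0.10,    # works towards a desirable future goal
    "pareto_score": 0.10,       # penalizes negative external impacts
    "track_record": 0.15,       # allocator/founder competence
}

def score_grant(criteria: dict) -> float:
    """Combine per-criterion scores (each 0-1) into one weighted score."""
    return sum(WEIGHTS[name] * criteria.get(name, 0.0) for name in WEIGHTS)

example = {"sroi": 0.8, "cost_effectiveness": 0.7, "horizon_alignment": 0.5,
           "backcasting_fit": 0.6, "pareto_score": 0.9, "track_record": 0.4}
print(f"Weighted score: {score_grant(example):.2f}")  # 0.66 under these guesses
```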
1
brb243
Is this neglecting the notion that some of the grants are meant to strategically develop interest, by presenting in ways that appeal to different decisionmakers, since the objectives are already known - such as improving the lives of humans and animals in the long term, and preventing actors (including those who use and develop AI) from reducing the wellbeing of these individuals? It can be a bit of a reputational-loss risk to evaluate: 'well, we started convincing the government to focus on the long term by appealing to the extent of the future, so now we can start talking about the quality of life in various geographies, and if this goes well then we move on to the advancement of animal-positive systems across spacetime?'

This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.

I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.

7
Stephen Clare
I'm (pleasantly) surprised by the number of entries! But as a result the Forum seems pretty far from optimal as a platform for this discussion. Would be helpful to have a way to filter by focus area, for example.
3
Nathan Young
Yeah I suggest it should be done like this, with search and filters as you suggest. https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=G7aLWq4zypE77Fn6f 
6
Taras Morozov
To prove the point: ATM the most upvoted comment is also the oldest one - Pablo's Retrospective grant evaluations.
4
Greg_Colbourn
The winners have been announced. It's interesting to note the low correlation between comment karma and awards. Of the (3 out of 6) public submissions, the winners had a mean of 20 karma [as of posting this comment], minimum 18, and the (9 out of 15) honourable mentions a mean of 39 (suggesting perhaps these were somewhat weighted "by popular demand"), minimum 16. None of the winners were in the top 75 highest rated comments; 8/9 of the publicly posted honourable mentions were (including 4 in the top 11).  There are 6 winners and 15 honourable mentions listed in OP (21 total); the top 21 public submissions had a mean karma of 52, minimum 38; the top 50 a mean of 40, minimum 28; and the top 100 a mean of 31, minimum 18. And there are 86 public submissions not amongst the awardees with higher karma than the lowest karma award winner.  See spreadsheet for details. Given that half of the winners were private entries (2/3 if accounting for the fact that one was only posted publicly 2 weeks after the deadline), and 40% of the honourable mentions, one explanation could be that private entries were generally higher quality. Note karma is an imperfect measure (so in addition to the factor Nathan mentions, maybe the discrepancy isn't that surprising).
2
Nathan Young
Alternatively, there could be an alternate ranking mode where you get two comments shown at once and you choose which one is better, or whether they are about the same. Even a few people doing that would start to give a sense of whether they agree with the overall ranking.
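A minimal sketch of how such pairwise votes could be aggregated into a ranking, using Elo-style updates; the K-factor and base rating are arbitrary assumptions, and "about the same" is scored as a draw:

```python
# Toy Elo-style aggregation of pairwise "which comment is better?" votes.

def elo_update(r_a: float, r_b: float, result: float, k: float = 32.0):
    """result: 1.0 if A preferred, 0.0 if B preferred, 0.5 if about the same."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (result - expected_a), r_b + k * (expected_a - result)

ratings = {"comment_1": 1000.0, "comment_2": 1000.0}
# One voter prefers comment_2:
ratings["comment_1"], ratings["comment_2"] = elo_update(
    ratings["comment_1"], ratings["comment_2"], result=0.0)
print(ratings)  # comment_2 now ranks above comment_1
```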

Starting EA community offices

Effective altruism

Some cities, such as Boston and New York, are home to many EAs and some EA organizations, but lack dedicated EA spaces. Small offices in these cities could greatly facilitate local EA operations. Possible uses of such offices include: serving as an EA community center, hosting talks or reading groups, providing working space for small EA organizations, reducing overhead for event hosting, etc.

(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)

Here is a more ambitious version:

EA Coworking Spaces at Scale

Effective Altruism

The EA community has created several great coworking spaces, but mostly in an ad hoc way, with large overheads. Instead, a standard EA office could be created in up to 100 towns and cities. Companies, community organisers, and individuals working full-time on EA projects would be awarded a membership that allows them to use these offices in any city. Members gain from being able to work more flexibly, in collaboration with people with similar interests (this especially helps independent researchers with motivation). EA organisations benefit from a decreased need to do office management (which can be done centrally without special EA expertise). EA community organisers gain easier access to an event space and standard resources, such as a library and hotdesking space, and some access to the expertise of others using the office.

Here is an even more ambitious one:

Found an EA charter city

Effective Altruism

A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.

9
RyanCarey
What would be the value add of an EA city, over and above that of an EA school and coworking space? For example, I don't see why you need to eat at an EA restaurant, rather than just a regular restaurant with tasty and ethical food. Note also that the libertarian "Free State Project" seems to have failed, despite there being many more libertarians than effective altruists.
2
mako yass
Lower cost of living, meaning you can have more people working on less profitable stuff. I'm not sure 5000 free staters (out of 20k signatories) should be considered failure.
2
RyanCarey
Right, but it sounds like it didn't go well afterwards? https://www.google.com/amp/s/newrepublic.com/amp/article/159662/libertarian-walks-into-bear-book-review-free-town-project
1
Leo
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
3
RyanCarey
Could you elaborate on the policies? And what, roughly, are you picturing - an EA-sympathising municipal government, or a more of a Honduran special economic zone type situation?
1
Leo
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thinking into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational colonization of other planets when nobody on earth has ever been able to successfully found at least one rational city?
1
mako yass
Note, VR is going to get really good in the next three years, so I wouldn't personally recommend getting too invested in any physical offices, but I guess as long as we're renting it won't be our problem.
4
Jeff Kaufman
I think it is pretty unlikely that VR improvements on the scale of 3y will make people stop caring about being actually in person. This is a really hard problem that people have been working on for decades, and while we have definitely made a lot of progress, if we were 3y from "who needs offices?" I would expect to already see many early adopters pushing VR as a comfortable environment for general work (VR desktop) or meetings.
1
mako yass
What problem are you referring to? Face tracking and remote presence didn't have a hardware platform at all until 2016, weren't a desirable product until maybe this year (mostly due to covid), and won't be a strongly desirable product until hardware starts to improve dramatically next year. And due to the perversity of social software economics, it won't be profitable in proportion to its impact, so it'll come late. There are currently zero non-blurry face-tracking headsets that are light enough to wear throughout a workday, so you should expect to not see anyone using VR for work. But we know that next year there will be at least one of those (Apple's headset). It will appear suddenly and without any viable intermediaries. This could be a miracle of Apple, but from what I can tell, it's not. Competitors will be capable of similar feats a few years later. (I expect to see limited initial impact from Apple VR (limited availability and reluctance from Apple to open the gates); the VR office won't come all at once, even though the technical requirements will.) (You can get headsets with adequate visual acuity (60ppd) right now, but they're heavy, which makes them less convenient to use than 4k screens. They're expensive, and they require a bigger, heavier, and possibly even more expensive computer to drive them (though this was arguably partly a software problem), which also means they won't have the portability benefits that 2025's VR headsets will have, which means they're not going to be practical for much at all, and afaik the software for face tracking isn't available for them, and even if it were, it wouldn't have a sufficiently large user network in professional realms.)
2
Chris Leong
You think they'll get past the dizziness problem?
1
mako yass
I think everyone will adapt. I vaguely remember hearing that there might be a relatively large contingent of people who never do adapt, I was unable to confirm this with 15 minutes of looking just now, though. Every accessibility complaint I came across seemed to be a solvable software problem rather than anything fundamental.
6
Chris Leong
I heard that New York was starting a coworking space as well
2
JanB
I think Berlin has something like this
4
victor.yunenko
Indeed, the space was organized by Effektiv Spenden: teamwork-berlin.org
1
Yonatan Cale
I think EA Israel would have more people working remotely in international organizations if we had community offices. [We recently got an office which I'm going to check out tomorrow; Not an ideal location for me but will try!]

Investment strategies for longtermist funders

Research That Can Help Us Improve, Epistemic Institutions, Economic growth

Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out. 

We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.

Some of the ways the strategies of altruistic funders may differ include:

  • Mission-correlated investing
... (read more)

I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up/modelled on the existing EA long term future fund (which I recall reading about but can't find now, sorry)

Edit - found it and some ideas - see this and top level post.

2
Greg_Colbourn
Just going to note that SBF/FTX/Alameda are already setting a very high benchmark when it comes to investing!
1
brb243
A systemic change investment strategy for your review.
1
JBPDavies
You may be interested in the following project I'm working on: https://deeptransitions.net/news/the-deep-transition-futures-project-investing-in-transformation/ . The project's goal is to develop a new investment philosophy & strategy (complete with new outcome metrics) aimed at achieving transformational systems change. The project leverages the Deep Transitions theoretical framework, as developed within the field of Sustainability Transitions and Science, Technology and Innovation Studies, to create a theory of change and subsequently enact it with a group of public and private investors. Would recommend diving into this if you're interested in the nexus of investment and the transformation of current systems/shaping future trajectories. I can't say too much about future plans at this stage, except that following the completion of the current phase (developing the philosophy, strategies and metrics), there will be an extended experimentation phase in which these are applied, tested and continuously redeveloped.

Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles

Effective Altruism

When it comes to enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do related research, e.g. on health issues, themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.

Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.

6
Brendon_Wong
I was going to write a similar comment for researching and promoting well-being and well-doing improvements for EAs as well as the general public! Since this already exists in similar form as a comment, strong upvoting instead. Relevant articles include Ben Williamson’s project (https://forum.effectivealtruism.org/posts/i2Q3DTsQq9THhFEgR/introducing-effective-self-help) and Dynomight’s article on “Effective Selfishness” (https://dynomight.net/effective-selfishness/). I also have a forthcoming article on this. Multiple project ideas that have been submitted also echo this general sentiment, for example “Improving ventilation,” “Reducing amount of time productive people spend doing paperwork,” and “Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin).” Edit: I am launching this as a project called Better! Please get in touch if you're interested in funding, collaborating on, or using this!

Reducing gain-of-function research on potentially pandemic pathogens

Biorisk

Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.

Additional notes:

  • There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals (*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishi
... (read more)

Putting Books in Libraries

Effective Altruism

The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy, for each library, any number of books from the menu. This would ensure that folks interested in EA-related topics, who browse a library, discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.

The case for the project is that if you assume that the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than its financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries within a short period of weeks to months. It can also serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
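A hedged back-of-envelope version of this arithmetic: the $200k-per-contributor value and the one-read-per-year rate come from the paragraph above, while the conversion rate per read is an added guess, not a known figure:

```python
# Back-of-envelope cost-benefit for the library book distribution.
books = 30 * 10_000          # ~30 titles x ~10,000 libraries
cost = books * 20            # ~$20 per book placed
reads_per_year = books * 1   # assumption above: each copy read once per year
p_convert = 1 / 10_000       # hypothetical: 1 in 10,000 reads yields a contributor
annual_value = reads_per_year * p_convert * 200_000  # $200k per contributor

print(f"Up-front cost:        ${cost:,}")             # $6,000,000
print(f"Implied annual value: ${annual_value:,.0f}")  # $6,000,000 per year
```

Under these guesses the project would pay for itself within a year, though the estimate is obviously very sensitive to the assumed conversion rate.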

I like this idea, but I wonder - how many people/students actually use physical libraries still? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).

5
Cillian_
A way around this could be to provide e-books and audio books instead of physical copies. Would also make the distribution easier. (In the UK at least, it's possible to borrow e & audio from your local library using the Libby app)
3
Greg_Colbourn
I imagine that e-book systems (text and audio) work via access to large libraries, rather than needing people to request books be added individually? So maybe there is no action needed on this front (although someone should probably check that most EA books are available in such collections).
2
mic
My understanding is that individual libraries license an ebook for a number of uses or a set period of time (say, two years).
2
mic
I think print books are still preferred by more readers compared to e-books. You might as well donate the books in both the physical and digital formats and probably also as an audiobook. It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.

I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.

In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.

8
RyanCarey
Not sure, but targeted social media advertising would also be a great project.
6
Greg_Colbourn
Added.

Never Again: A Blue-Ribbon Panel on COVID Failures

Biorisk, Epistemic Institutions

Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?

We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name. 

https://en.wikipedia.org/wiki/Never_again 

2
Peter Wildeford
Sorry - I was not aware of this
2
Ozzie Gooen
No worries! I assumed as such.

Are you thinking of EAs running this themselves?  We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.

I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel".   In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project.  But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.

2
IanDavidMoss
This is a great point. Also worth noting that there have been some retrospectives already, e.g. this one by the WHO: https://theindependentpanel.org/wp-content/uploads/2021/05/COVID-19-Make-it-the-Last-Pandemic_final.pdf It would be worth considering the right balance between putting resources toward conducting an original analysis vs. mustering the political will for implementing recommendations from retrospectives like those above.
4
Jan_Kulveit
Note that CSER is running a project roughly in this direction.
4
Sean_o_h
An early output from this project: Research Agenda (pre-review) Lessons from COVID-19 for GCR governance: a research agenda The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management such as extreme climate change, radical loss of biodiversity and the governance of extreme risks posed by new technologies. Our study aims to identify key moments- ‘inflection points’- that significantly shaped the catastrophic trajectory of COVID-19. To that end this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, both at the national and the global level, using counterfactual analysis. Four aspects are used to assess candidate inflection points within each cluster: 1. the information available at the time; 2. the decision-making processes used; 3. the capacity and ability to implement different courses of action, and 4. the communication of information and decisions to different publics. The Research Agenda identifies crucial questions in each cluster for all four aspects that should enable the identification of the key lessons from COVID-19 and the pandemic response.
2
Sean_o_h
https://www.cser.ac.uk/research/lessons-covid-19/

Cognitive enhancement research and development (nootropics, devices, ...)

Values and Reflective Processes, Economic Growth

Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability (such as long/short-term memory, abstract reasoning, or creativity) and any stage of the research and development pipeline, from wet-lab research or engineering, through testing in humans, to product development.

Additional notes on cognitive enhancement research:

  • Importance:
    • Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
    • Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising, there is some precedent (e.g. massive benefit
... (read more)
5
Jackson Wagner
I think this is an underrated idea, and should be considered a good refinement/addition to the FTX theme #2 of "AI-based cognitive aids". If it's worth kickstarting AI-based research assistant tools in order to make AI safety work go better, then doesn't the same logic apply towards:
  • Supporting the development of brain-computer interfaces like Neuralink.
  • Research into potential nootropics (glad to hear you are working on replicating the creatine study!) or the negative cognitive impact of air pollution and other toxins.
  • Research into tools/techniques to increase focus at work, management best practices for research organizations, and other factors that increase productivity/motivation.
  • Ordinary productivity-enhancing research software like better note-taking apps, virtual reality remote collaboration tools, etc.
The idea of AI-based cognitive aids only deserves special consideration insofar as:
  1. Work on AI-based tools will also contribute to AI safety research directly, but won't accelerate AI progress more generally. (This assumption seems sketchy to me.)
  2. The benefit of AI-based tools will get stronger and stronger as AI becomes more powerful, so it will be most helpful in scenarios where we need help the most. (IMO this assumption checks out. But this probably also applies to brain-computer interfaces, which might allow humans to interact with AI systems in a more direct and high-bandwidth way.)

Create and distribute civilizational restart manuals

A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme diasters.

The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus the 10 most important books, sent to 100 safe and relatively uncorrelated locations -- somewhere in New Zealand, an Antarctic research base, a couple of nuclear bunkers, nuclear submarines, etc.

We are perhaps even more concerned about great moral values, like concern for all sentient beings, surviving and re-emerging than about preserving civilization itself, so we would love for people to do further research and work on how to preserve cosmopolitan values as well.

My comment from another thread applies here too:

Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition:

Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge or even if it’s the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe.

The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc.

I think that is also something that none of the existing projects take into account.

5
Greg_Colbourn
Relatedly, see this post about continuing AI Alignment research after a GCR.
2
Dawn Drescher
Very good!
3
ben.smith
Building on the above idea: research the technology required to restart modern civilization and ensure the technology is understood and accessible in safe havens throughout the world. A project could ensure that not only the know-how but also the technology exists, dispersed in various parts of the world, to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand’s economy is highly specialized and, for many technologies, relies on importing technology rather than producing it indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training enabling strategic technologies important for restart could be planted in locations like New Zealand and other social contexts which are relatively safe. At an extreme, industries could be subsidized which localize technology required for a restart. This would not necessarily mean the most advanced technology; rather, it means technologies that have been important to develop to the point we are at now.
3
Linch
Yes this is exciting to me, and related. Though of course generalist research talent is in short supply within EA, so the bar for any large-scale research project taking off is nontrivially high.
2
Dawn Drescher
I didn’t write this up as a separate proposal as it seemed a bit self-serving, but creating underground cities for EAs with all the ALLFED technology and whatnot and all these backups could enable us to afterwards build a utopia with all the best voting methods and academic journals that require Bayesian analyses and publish negative results and Singer on the elementary school curriculum and universal basic income etc.
2
Hauke Hillebrandt
All of Wikipedia is just 20GB. Maybe there could be a way to share backups via BitTorrent or an 'offline version' of it... it would fit comfortably on most modern smartphones.
8
Linch
Digital solutions are not great because ideally you want something that can survive centuries or at least decades. But offline USBs in prominent + safe locations might still be a good first step anyway.
2
Greg_Colbourn
I've got a full version of the English Wikipedia, complete with images, on my phone (86GB). It's very easy to get using the Kiwix app.
2
Greg_Colbourn
I note there isn't much on Kiwix in terms of survival/post-apocalypse collections (just a few TED talks and YouTube videos): a low-hanging fruit ripe for the picking.
2
Greg_Colbourn
Maybe someone should make an EA related collection and upload it to Kiwix? (Best books, EA Forum, AI Alignment Forum, LessWrong, SSC/ACX etc). This might be a good way of 80/20-ing preserving valuable information. As a bonus, people can easily and cheaply bury old phones with the info on, along with solar/hand-crank chargers.
1
wbryk
The group who discovers this restart manual could gain a huge advantage over the other groups in the world population -- they might reach the industrial age within a few decades while everyone else is still in the stone age. This discoverer group will therefore have a huge influence over the world civilization they create. I wonder if there were a way to ensure that this group has good values, even better values than our current world. For example, imagine there were a series of value tests within the restart manual that the discoverers were required to pass in order to unlock the next stage of the manual. Either multiple groups rediscover the manual and fail until one group succeeds, or some subgroup unlocks the next step and is able to leap technologically above the others in the group fast enough to ensure that their values flourish. If those value tests somehow ensure that a high score means the test-takers care deeply about the values we want them to have, then only those who've adopted these values will rule the earth. As a side note, this would be a really cool short story or movie :)

SEP for every subject

Epistemic institutions

Create free online encyclopedias for every academic subject (or those most relevant to longtermism), written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well-loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decisionmakers and the public with better access to academic views on a variety of topics.

5
Peter S. Park
Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?
2
agnode
I've read that experts often get frustrated with wikipedia because their work ends up getting undone by non-experts. Also there probably needs to be financial support and incentives for this kind of work. 
1
brb243
Yeah make it accessible and normally accepted.
2
Yitz
This would have to be a separate project from my proposed direct Wikipedia editing, but I'd be very much in support of this (I see the efforts as being complementary).

Preventing factory farming from spreading beyond the earth

Space governance, moral circle expansion (yes I am also proposing a new area of interest.)

Early space advocates such as Gerard O’Neill and Thomas Heppenheimer both included animal husbandry in their designs of space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in or announced projects to employ fish or insect farming in space.

This, if successful, might multiply the suffering of farmed animals to many times the current number of farmed animals on earth, spread across the long-term future. Research is needed in areas like:

... (read more)

Purchase a top journal

Metascience

Journals give academics bad incentives: they require new knowledge to be written in hard-to-understand language, published without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivised high-quality work on the most important topics would begin to turn the scientific system around.

We could, of course, simply get the future fund to pay for this. There is, however, an alternative that might be worth thinking about.

This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.

Here is one way we could do this:

  1. Use a system like pol.is to identify points of consensus between universities. This should be about the rules going forward if we buy the journal. For example, do they all want pre-registration? What should the copyright situation be? How should peer review work? How should the journal be run? Etc.
  2. Whatever the consensus is, commit to implementing it if the buyout is successful
  3. Start crowdsourcing the funds needed. To maximise the chance of success, this should be done using a DAC (dominant assurance contract). This works like any other crowdfunding mechanism (GoFundMe, Kickstarter, etc), except we have a pool of money that is used to pay the early backers if we fail to meet the goal. If the standard donation size we're asking the unis for i
... (read more)
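For readers unfamiliar with the mechanism, here is a toy model of the dominant assurance contract logic described above; the goal, bonus rate, and pledge amounts are illustrative placeholders:

```python
# Toy dominant assurance contract (DAC): if the campaign fails, backers are
# refunded *with a bonus* paid by a guarantor (here, the Future Fund), which
# makes pledging attractive even for backers who expect the campaign to fail.

GOAL = 1_000_000    # hypothetical funds needed for the journal buyout
BONUS_RATE = 0.05   # failure bonus paid to backers from the assurance pool

def settle(pledges: list[float]) -> str:
    total = sum(pledges)
    if total >= GOAL:
        return f"Success: collect ${total:,.0f} and execute the buyout."
    guarantor_cost = sum(p * BONUS_RATE for p in pledges)
    return f"Failure: refund all pledges plus ${guarantor_cost:,.0f} in bonuses."

print(settle([400_000, 300_000, 200_000]))  # falls short, so bonuses are paid
```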
3
Jonathan Nankivell
Update: I emailed Alex Tabarrok to get his thoughts on this. He originally proposed using dominant assurance contracts to solve public good problems, and he has experience testing them empirically. He makes the following points about my suggestion:
  • The first step is the most important. Without clarity about what the public good will be and who is expected to pay for it, the DAC won't work
  • You should probably focus on libraries as the potential source of funding. They are the ones who pay subscription fees, and they are the ones who would benefit from this
  • DACs are a novel form of social technology. It might be best to try to deliver smaller public goods first, allowing people to get more familiar, before trying to buy a journal
He also suggested other ways to solve the same problem:
  • Have you considered starting a new journal? This should be cheaper. There would also be coordination questions to solve to make it prestigious, but this one might be easier
  • Have you considered 'flipping' a journal? Could you take the editors, reviewers and community that supports an existing journal, and persuade them to start a similar but open access journal? (The Fair Open Access Alliance seems to have had success facilitating this. Perhaps we should support them?)
My current (and weakly held) position is that flipping editorial boards to create new open access journals is the best way to improve publishing standards. Small steps towards a much better world. Would it be possible for the Future Fund to entice 80% of the big journals to do this? The top journal in every field? Maybe.
2
brb243
Is this a reputational-loss risk - an actor in the broader EA community seeking to influence the scientific discourse by economic, non-peer-reviewed means? There are repositories of papers, such as the Legal Priorities Project's, that are cool, and the EA community pays attention to aggregate narratives to keep some of its terms rather exclusive and convincing. If you mean coordinating research, to learn from the scientific community, then it can make sense to read papers and correspond with academics - maybe on the EA Forum or so. No need to buy a journal.
2
James Bailey
Agree, was thinking of submitting a proposal like this. A few ways to easily improve most journals:
  • Require data and code to be shared
  • Open access, but without the huge author fees most open access journals charge
  • If you do charge any fees, use them to pay reviewers for fast reviews
1
Jonas Moss
Shouldn't reviewers be paid, regardless of fees? It is a tough job, and there should be strong incentives to do it properly.

A Longtermist Nobel Prize

All Areas

The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.

(A variation on a suggestion by DavidMoss)

2
Gavin
How much of the prestige is the money value, how much just the age of the prize, and how much the association with a fancy institution like the Swedish monarchy?  I seem to remember that Heisenberg etc were more excited by the money than the prize, back in the day.
2
RyanCarey
The money isn't necessary - see the Fields Medal. Nor is the Swedish Monarchy - see the Nobel Memorial Prize in Econ. Age obviously helps. And there's some self-reinforcement - people want the prize that others want. My guess is that money does help, but this could be further investigated.
4
Hauke Hillebrandt
The Jacobs Foundation awards $1m prizes to scientists as a grant - I think this might be one of the biggest - one could award $5-10m to make it the most prestigious prize in the world.
1
Taras Morozov
I think Templeton Prize has become prestigious because they give more money than the Nobel on purpose.

Megastar salaries for AI alignment work

Artificial Intelligence

Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.

5
Greg_Colbourn
Here's a more fleshed out version, FAQ style. Comments welcome.

Longtermist Policy Lobbying Group

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bipartisan, there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. We think that while such a project could scale to be very large once successful, anyone working on it should start small and tread carefully, aiming to avoid issues around the unilateralist curse and ensuring not to make longtermism into an overly partisan issue. It's likely that longtermist lobbying might be best done as lobbying for other areas clearly related to longtermism, but as a distinct idea - such as lobbying for climate change mitigation or pandemic preparedness.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

4
IanDavidMoss
I think some form of lobbying for longtermist-friendly policies would be quite valuable. However, I'm skeptical that running lobbying work through a single centralized "shop" is going to be the most efficient use of funds. Lobbying groups tend to specialize in a specific target audience, e.g., particular divisions of the US federal government or stakeholders in a particular industry, because the relationships are really important to success of initiatives and those take time to develop and maintain. My guess is that effective strategies to get desired policies implemented will depend a lot on the intersection of the target audience + substance of the policy + the existing landscape of influences on the relevant decision-makers. In practice, this would probably mean at the very least developing a lot of partnerships with colleague organizations to help get things done or perhaps more likely setting up a regranting fund of some kind to support those partners. Happy to chat about this further since we're actively working on setting something like this up at EIP.
4
Peter Wildeford
I agree with you on the value of not overly centralizing this and of having different groups specialize in different policy areas and/or approaches.

Landscape Analysis: Longtermist Policy

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bipartisan, but there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

2
PeterSlattery
I really like this idea and think that having a global policy network could be valuable over the long term. Particularly if coordinated with other domains of EA work. For instance, I can imagine RT and various other researcher orgs and researchers providing evidence on demand to EAs who are directly embedded within policy production. 
1
JBPDavies
Hi Peter (if I may!), I love this and your other longtermism suggestions, thanks for submitting them! Not sure if you saw my suggestion below of a Longtermism Policy Lab - but maybe this is exactly the kind of activity that could fall under such an organisation/programme (within Rethink, even)? Likewise for your suggestion of a lobbying group - by working directly with societal partners (e.g. national ministries across the world) you could begin implementation directly through experimentation. I've been involved in a similar (successful) project called the 'Transformative Innovation Policy Consortium (TIPC)', which works with, for example, the Colombian government to shape innovation policy towards sustainable and just transformation (as opposed to systems optimisation). Would love to talk to you about your ideas for this space if you're interested. I'm working with the Institutions for Longtermism research platform at Utrecht University and we're still trying to shape our focus, so there may be some scope for piloting ideas.
2
IanDavidMoss
JBPDavies, it sounds like you and I should connect as well -- I run the Effective Institutions Project and I'd love to learn more about your Institutions for Longtermism research and provide input/ideas as appropriate.
1
JBPDavies
Sounds fantastic - drop me an email at j.b.p.davies@uu.nl and I would love to set up a meeting. In the meantime I'll dive into EIP's work!
2
Peter Wildeford
Sure! Email me at peter@rethinkpriorities.org and I will set up a meeting.
1
brb243
If the polling shows that the most popular policies are those that safeguard the long-term objectives of the nation's top lobbyists while disregarding others' preferences, do you recommend them as attention-captivating conversation starters, so that impartial consideration can be explained one-on-one to regulators - supporting its internalization by implementing measures to prevent the enactment of these possibly catastrophically risky (codified dystopia for some actors) popular policies, if I understand it correctly?

Experiments to scale mentorship and upskill people

Empowering Exceptional People, Effective Altruism

For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and there are few well-defined paths for young or early-career people to follow, so it can be hard to enter the field. Experts in the field are often ideal mentors - they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open up access to new opportunities - but there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The kinds of possible solutions are very broad: developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.

Proportional prizes for prescient philanthropists

Effective Altruism, Economic Growth, Empowering Exceptional People

A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.

Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.

If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.
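A minimal sketch of the proportional payout rule described above; the prize pool, the overhead cap, and the contribution figures are placeholders, and contributions are assumed to already be expressed in dollars (donations at face value, advisor hours at market rates):

```python
# Toy proportional prize: split a pool across supporters in proportion to
# their dollar-valued contributions, skipping payouts so small that they
# would be eaten by administrative overhead.

PRIZE_POOL = 100_000
MIN_PAYOUT = 50  # cap: drop payouts close to the cost of administering them

def payouts(contributions: dict[str, float]) -> dict[str, float]:
    total = sum(contributions.values())
    raw = {who: PRIZE_POOL * amt / total for who, amt in contributions.items()}
    return {who: round(amt, 2) for who, amt in raw.items() if amt >= MIN_PAYOUT}

print(payouts({"founder": 120_000, "advisor": 30_000, "small_donor": 50}))
# {'founder': 79973.34, 'advisor': 19993.34}  (small_donor falls under the cap)
```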

5
IanDavidMoss
Reading this again, I want to register that I am much more excited about the idea of rewarding donors for early investment than I am about the other elements of the plan. As someone who has founded multiple organizations, the task of attaching precise retrospective monetary values to different people's contributions of time, connections, talent, etc. in a way that will satisfy everyone as fair sounds pretty infeasible. Early donations, by contrast, are an objective and verifiable measure of value that is much easier to reward in practice. You could just say that the first, say $500k that the org raises is eligible for retroactive reward/matching/whatever, with maybe the first $100k or something weighted more heavily. It's also worth thinking through the incentives that a system like this would set up, especially at scale. It would result in more seed funding and more small charities being founded and sustained for the first couple of years. I personally think that's a good thing at the present time, but I also know people who argue that we should be taking better advantage of economies of scale in existing organizations. There is probably a point  at which there is too much entrepreneurship, and it's worth figuring out what that point is before investing heavily in this idea.
4
Dawn Drescher
Owen Cotton-Barratt and I have thought about this for a while and have mostly arrived at the solution that beneficiaries who collaborated on a project need to hash this out with each other. So make a contract, like in a for-profit startup, that specifies who owns how much of the impact of the project. I think that capable charity entrepreneurs are a scarce resource as well, so we should try hard to foster them. So that’s probably where a large chunk of the impact is. When it comes to the incentive structures: We – mostly Matt Brooks and I, but the rest of the team will be around – will hold a talk on the risks from perverse incentives in our system at the Funding the Commons II conference tomorrow. Afterwards I can also link the video recording here. My big write-up, which is more comprehensive than the presentation but unfinished, is linked from the other proposal. That said … I don’t quite understand… More funding for donors -> more donors -> more money to charities -> higher scale, right? So this system would enable charities to hire more so people can specialize etc., not the opposite? Thanks!
3
colin
This is really interesting. Setting up individual projects as DAOs could be an effective way to manage this.  The DAO issues tokens to founders, advisors, and donors.  If retrospectively it turns out that this was a particularly impactful project the funder can buy and burn the DAO tokens, which will drive up the price, thereby rewarding all of the holders.
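
To make the price mechanics concrete, here's a toy model of buy-and-burn against a constant-product AMM pool; the pool venue and all figures are illustrative assumptions, since the comment doesn't specify an implementation:

```python
# Toy buy-and-burn against a constant-product pool (x * y = k).
# The funder spends grant money buying project tokens from the pool and
# burns them; the spot price rises for all remaining holders.

def buy_and_burn(token_reserve: float, cash_reserve: float,
                 grant: float) -> tuple[float, float, float]:
    k = token_reserve * cash_reserve        # pool invariant
    new_cash = cash_reserve + grant         # grant money enters the pool
    new_tokens = k / new_cash               # tokens left after the buy
    burned = token_reserve - new_tokens     # tokens bought and burned
    return new_tokens, new_cash, burned

tokens, cash = 1_000_000.0, 100_000.0       # spot price: $0.10 per token
tokens, cash, burned = buy_and_burn(tokens, cash, 50_000.0)
print(f"burned {burned:,.0f} tokens; new price ${cash / tokens:.3f}")
# burned ~333,333 tokens; new spot price ~$0.225
```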
2
Dawn Drescher
Yep! There’s this other proposal for impact markets linked above. That’s basically that with slight tweaks. It’s all written in a technology-agnostic way, but one of the implementations that we’re currently looking into is on the blockchain. There’s even a bit of a prototype already. :-D
2
IanDavidMoss
I really like this idea, and FWIW find it much more intuitive to grasp than your impact markets proposal.
2
Dawn Drescher
Sweet, thanks! :-D Then it’ll also help me explain impact markets to people.

High-quality EA Audio Library (HEAAL)

all/meta, though I think the main value add is in AI

(Nonlinear has made a great rough/low-quality version of this, so at least some credit/prize should go to them.)

Audio has several advantages over text when it comes to consuming long-form content, one significant example being that people can consume it while doing some other task (commuting, chores, exercising), meaning the time cost of consumption is almost zero. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:

I think a highly talented, dedicated generalist could become one of the world’s 25 most broadly knowledgeable people on the subject (in the sense of understanding a number of different agendas and arguments that are out there, rather than focusing on one particular line of research), from a standing start (no background in AI, AI alignment or computer science), within a year

What does high quality mean here, and what content might get covered?

  • High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical n

... (read more)
2
Nathan Young
Frankly, I'd like the ability to send a written feed somewhere and have it turned into audio, maybe crowdfunded. Clearly Nonlinear can do it, so why can't I have it for, say, Bryan Caplan's writing?
3
alex lawsen
If you're ok with autogenerated content of roughly Nonlinear's quality, both Pocket and Evie are reasonable choices.

High-quality human performance is much more engaging than autogenerated audio, fwiw.

4
alex lawsen
Hence the original pitch!
2
Nathan Young
Nonlinear could be paid to repost the most upvoted posts but with voice actors.

Our World in Base Rates

Epistemic Institutions

Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a great public good.

So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example. 

e.g. 

“85% of big data projects fail”;
“10% of people refuse to be vaccinated because of fear of needles” (pre-COVID, so you can compare to COVID hesitancy);
“11% of ballot initiatives pass”;
“7% of Emergent Ventures applications are granted”;
“50% of applicants get 80k advice”;
“x% of applicants get to the 3rd round of OpenPhil hiring”, “which takes y months”;
“x% of graduates from country [y] start a business”.

MVP:

  • come up with hundreds of base rates relevant to EA causes
  • scrape Wikidata for them, or diffbot.com
  • recurse: get people to forecast the true value, or later value (put them in a private competition on Foretold, index them on metaforecast.org)


Later, Q... (read more)
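
As a rough illustration of how the MVP's base-rate entries might be stored and served - the Laplace smoothing and all figures below are illustrative additions, not part of the proposal:

```python
# Sketch of a base-rate entry with additive (Laplace) smoothing, which keeps
# tiny samples away from 0% and 100%. Figures are placeholders, not data.
from dataclasses import dataclass

@dataclass
class BaseRate:
    claim: str
    successes: int   # e.g. ballot initiatives that passed
    trials: int      # e.g. ballot initiatives observed
    source: str      # provenance, so forecasters can audit the entry

    def rate(self, smoothing: float = 1.0) -> float:
        """Smoothed point estimate of the base rate."""
        return (self.successes + smoothing) / (self.trials + 2 * smoothing)

rates = [
    BaseRate("ballot initiatives pass", 11, 100, "placeholder source"),
    BaseRate("big data projects succeed", 15, 100, "placeholder source"),
]
for r in rates:
    print(f"{r.claim}: {r.rate():.0%} (n={r.trials})")
```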

I think this is neat. 

Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."

I of course see this as basically a bunch of estimation functions, but you get the idea.

Teaching buy-out fund

Reallocate EA Researchers from Teaching Activities to Research

Problem: Professors spend a lot of their time teaching instead of researching. Many don’t know that their universities offer “teaching buy-outs”, where if you pay a certain amount of money, you don’t have to teach. Many also don’t know that EA funders would be interested in paying for this.

Solution: Make a fund that's explicitly for this, so that more EAs know about it. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.

Adversarial collaborations on important topics

Epistemic Institutions

There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, on which reasonable people hold very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working together to clarify their disagreement and either resolve it or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified through them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.

Existing and past work relevant to this space includes the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.

1
brb243
What topics would be covered, and which are not yet covered elsewhere (e.g., militaries already talk about peace)? And who are the adversaries - or are they rather collaborators (such as private actors considering mergers, acquisitions, and industry interests, or public actors considering trade and alliance advantages)? Do you mean decisionmaker-nondecisionmaker collaborations? The issue there is that systems are internalized, so from the nondecisionmakers you can get "I want to be as powerful over others as the decisionmakers are", or an inability to express or even know their preferences (a chicken in a cage - what can it say; a cricket on a farm - what does it know about its preferences?). Probably, adversaries would prefer to talk about "how can we get the other to give us profit" rather than "how can we make impact", since the underlying agreement is "not impact, profit".

Foundational research on the value of the long-term future

Research That Can Help Us Improve

If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?


To elaborate on what I have in mind with this proposal, it seems important to conduct research beyond reducing existential risk over the next century – we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of the posts:

... (read more)
8
Fai
This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.

Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism

Epistemic Institutions, Values and Reflective Processes

Right now longtermism is being developed within a relatively narrow set of stakeholders and participants relative to the broad set of people (and nonhumans) that would be affected by the decisions we make. We'd like to see focus groups that engage a more diverse group of people (diverse across many axes, including but not limited to race, gender, age, geography, and socioeconomic status), explain longtermism to them, and explore what visions they have for the future of humanity (and nonhumans). Hopefully, over many iterations, we can find a way to bridge what is likely a rather large initial inferential distance and explore how a broader and more diverse group of people would think about longtermism once ideally informed. This work can be related to and informed by deliberative democracy. It could also help initiate what longtermists call "the long reflection".

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

7
IanDavidMoss
I absolutely love this idea and really hope it gets funded! It reminds me in spirit of the stakeholder research that IDinsight did to help inform the moral weights GiveWell uses in its cost-effectiveness analysis. At scale, it could parallel aspects of the process used to come up with the Sustainable Development Goals.

Incubator for Independent Researchers

Training People to Work Independently on AI Safety

Problem: AI safety is bottlenecked by management capacity and jobs. There are fewer than 10 orgs where you can do AI safety full-time, and they are limited by the number of people they can manage and by their research interests.

Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.

This could also be structured as a research organization instead of an incubator.

Expected value calculations in practice

Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.

Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.

We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.

EA Marketing Agency

Improve Marketing in EA Domains at Scale

Problem: EAs aren’t good at marketing, and marketing is important.

Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.

AGI Early Warning System

Anonymous Fire Alarm for Spotting Red Flags in AI Safety

Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.

Solution: create an anonymous form - a “fire alarm” (a whistleblowing Andon Cord of sorts) where these employees can report what they’re seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.

Alignment Forum Writers

Pay Top Alignment Forum Contributors to Work Full Time on AI Safety

Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.

Solution: Offer them enough money to quit their job and work on AI safety full time.

(Per Nick's note, reposting)

Political fellowships

Values and Reflective Processes, Empowering Exceptional People

We’d like to fund ways to pull people who would never run for political office into running for political office. It's like a MacArthur grant: you get a call one day - you've been selected. You'd make a great public servant, even if you don't know it. You'd get training, like the DCCC and NRCC provide, and when you run, you get two million dollars spent by a super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.

3
Jan-Willem
Great idea - at TFG we have similar thoughts and are currently researching whether we should run this and the best way to run a program like it. Would love to get input from people on this.

The Billionaire Nice List

Philanthropy

A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, people like checking lists. Let's attempt to create a list which can be sorted by different moral weights and incentivises billionaires to do more good. 

9
PeterSlattery
I really like this. I had a similar idea focused on trying to change the incentive landscape for billionaires to make it as high status as possible to be as high impact as possible. I think that lists and awards could be a good start. Would be especially good to have the involvement of some aligned ultrawealthy people who might have a good understanding of what will be effective.
3
Nathan Young
Yeah, I would love for those of us who know billionaires - or are billionaires - to give a sense of what motivates them.

Pro-immigration advocacy outside the United States

Economic Growth

Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress in the U.S. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing immigration or keeping it at about the same level. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for increased immigration there.

What this project could look like:

  1. Identify 2-5 developed countries where pro-immigration advocacy seems especially promising.
  2. Build partnerships with people and orgs in these countries with expertise in pro-immigration advocacy.
  3. Identify the most promising opportunities to increase immigration to these countries and act on them.

Related posts:

... (read more)
5
Greg_Colbourn
Japan is coming from a very low base - 2% of its population is foreign-born, vs. 15% in the US. A lot of room for more immigrants before "saturation" is reached, I guess. Although I imagine that xenophobia and racism are anti-correlated with immigration, at least at low levels [citation needed].
1
brb243
Top countries by refugees per capita; the world's most neglected displacement crises. Should these countries be supported in their hosting efforts (I read, I think, $0.1/person/day for food) and the crises prevented - such as by supporting the source-area parties to make and abide by legal agreements over resources, preventing the drug trade through higher-yield farming practices and education or urban career growth prospects, and improving curricula to add skills development in care for others (teaching preventive healthcare and interactions based on others' preferences) - as a possibly cost-effective alternative to pro-immigration advocacy? Otherwise, either privileged persons will be able to escape the poor situation, which will not be solved, or unskilled persons with poor norms will end up in places that may not improve their subjective wellbeing, since wellbeing is shaped by the norms they have internalized.
2
Eevee🔹
Your question is very long and hard to understand. Can you please reword it in plain English?
1
brb243
Displacement crises are large and neglected. For example, for one of the top 10 crises, 6,000 additional persons are displaced per day. Displaced persons can be supported by very low amounts, which makes a large difference. For example, $0.1/day for food and a low amount for healthcare. In some cases, this would have otherwise not been provided. So, supporting persons in crises in emerging economies, without solving the issues, can be cost-effective compared to spending comparable effort on immigration reform. Second, supporting countries that already host refugees of neglected crises to better accommodate these persons (so that they do not need to stay in refugee camps reliant on food aid and healthcare aid) - for example, by special economic zones, if these allow for savings accumulation, and education, so that refugees can better integrate and the public welcomes it due to economic benefits - can also be competitive in cost-effectiveness compared to immigration reform in countries with high public attention, political controversy, and much smaller refugee populations, such as the US. The intervention is more affordable, makes a larger difference for the intended beneficiaries, has a higher chance of political support, and can be institutionalized while solving the problem. Third, allocating comparable skills to neglected crises rather than to immigration reform in industrialized nations where unit decisionmaker's attention can be much more costly, such as the US, can resolve the causes of these crises, which can include limited ability to draft and enforce legal agreements around natural resources, or mitigate violence related to limited alternative prospects of drug farmers by sharing economic alternatives, such as higher-yield commodity farming practices, agricultural value addition skills, or upskilling systems related to work in urban areas. So, the cost-effectiveness of solving neglected crises by legal, political, and humanitarian assistance can be much higher th

Improving ventilation

Biorisk

Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.

We'd like to see a project that does rigorous research to establish strong ventilation strategies for a variety of contexts and explores their effectiveness. Once successful ventilation strategies are developed, and assuming it would be cost-effective to do so, this project could then aim to roll out ventilation and campaign/market for ventilation interventions as a for-profit, non-profit, or hybrid.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Advocacy organization for unduly unpopular technologies

Public opinion on key technologies.

Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.

4
Jackson Wagner
Probably want to avoid unifying all of these under one "we advocate for things that most people hate" advocacy group!  Although that would be pretty hilarious.  But funding lots of little different groups in some of these key areas is great, such as trying to make it easier to build clean energy projects of all kinds as I mention here.
4
simonfriederich
Right, it sounds absurd and maybe hilarious, but it's actually what I had in mind. The advantage is internal coherence. The idea is basically to let "ecomodernism" go mainstream, with a Greenpeace-like org whose ideas are more similar to the Breakthrough Institute's. It's far from clear that this can work, but it's worth a try, in my view. About your suggestion: I love it and voted for it.
2
Jackson Wagner
Maybe so... like an economics version of the ACLU that builds a reputation of sticking up for things that are good even though they're unpopular. Might work especially well if oriented around the legal system (where ACLU operates and where groups like Greenpeace and the ever-controversial NRA have had lots of success), rather than purely advocacy? Having a unified brand might help convince people that our side has a point. For instance, a group that litigates to fight against nimbyism by complaining about the overuse of environmental laws or zoning regulations... the nimbys would naturally see themselves as the heroes of the story and assume that lawyers on the pro-construction side were probably villains funded by big greedy developers. Seeing that their opposition was a semi-respected ACLU-like brand that fought for a variety of causes might help change people's minds on an issue. (On the other hand, I feel like the legal system is fundamentally friendlier terrain for stopping projects than encouraging them, so the legal angle might not work well for GMOs and power plants. But maybe there are areas like trying to ban Gain-of-Function research where this could be a helpful strategy.) We'd still probably want the brand of this group to be pretty far disconnected from EA -- groups like Greenpeace, the NRA, etc naturally attract a lot of controversy and demonization.
2
Andreas F
Since lifecycle analyses show that it is most likely the best option, I fully agree on the nuclear part. I also agree on the GMO part, since large meta-analyses show no adverse effects on the environment (compared on yield/area, biodiversity/dollar, yield/dollar, and labor/yield) relative to other agriculture. I have no assessment of cellular agriculture, but I do think it is fair to support such schemes (at least until we have solid data on this, and then decide again).
2
Peter S. Park
Note: Wanted to share an example. I think that while nuclear fission reactors are unpopular and this unpopularity is sticky, it is possible that efforts to preemptively decouple the reputation of nuclear fusion reactors from that of nuclear fission reactors can succeed (and that nuclear fusion's hypothetical positive reputation can be sticky over time). But it is also possible that the unpopularity of nuclear fission will stick to nuclear fusion. Which of these two possibilities occurs, and how proactive action can change this, is mysterious at the moment. This is because our causal/theoretical understanding of the science of human behavior is incomplete (see my submission, "Causal microfoundations for behavioral science"). Preemptive action regarding historically unprecedented settings like emergent technologies---for which much of the relevant data may not yet exist---can be substantially informed by externally valid predictions of people's situation-specific behavior in such settings.
3
simonfriederich
Interesting thought. FWIW, I think it's more realistic that we can turn around public opinion on fission first, reap more of the benefits of fission, and then have a better public landscape for fusion, than that we accept the unpopularity of fission as a given but somehow end up with popular fusion. But I may well be wrong.

Building the grantmaker pipeline

Empowering Exceptional People, Effective Altruism

The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding and training new grantmakers in best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, or something else.

NB: This might be a refinement of fellowships, but I think it's particularly important.

7
Jackson Wagner
This is such a good idea that I think FTX is already piloting a regranting scheme as a major prong of their Future Fund program! But it would be cool to build up the pipeline in other more general/systematic ways -- maybe with mentorship/fellowships, maybe with more experimental donation designs like donor lotteries and impact certificates, maybe with software that helps people to make EA-style impact estimates.
4
Cillian_
It seems that FTX's Regranting Program could be a great way to scalably distribute funds & build the grantmaker pipeline. We (Training for Good) are also developing a grantmaker training programme like what James has described here to help build up EA's grantmaking capacity (which could complement FTX's Regranting Program nicely). It will likely be an 8 week, part-time programme, with a small pot of "regranting" money for each participant and we're pretty excited to launch this in the next few months. In the meantime, we're looking for 5-10 people to beta test a scaled-down version of this programme (starting at the end of March). The time commitment for this beta test would be ~5 hours per week (~2 hrs reading, ~2 hrs projects, ~1 hr group discussion). If anyone reading this is interested, feel free to shoot me an email cillian@trainingforgood.com 

Automated Open Project Ideas Board

The Future Fund

All of these ideas should be submitted to a board where anyone can forecast their value (in lives saved per dollar) as it would be rated by a trusted research organisation, say Rethink Priorities. The forecasts can be reputation-based or run as prediction markets. That research organisation then checks 1% of the ideas and scores them. These scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Funders can then donate to them as they see fit.
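
One way the weighting step could work - under assumptions the proposal leaves open, namely scoring forecasters by mean absolute error on the audited 1% and weighting inversely to that error (a choice made here purely for illustration):

```python
# Sketch of forecast weighting: forecasters are scored on the spot-checked
# ideas, and those scores weight their forecasts on everything else.
import statistics

def forecaster_weight(audit_errors: list[float]) -> float:
    """Weight is the inverse of mean absolute error on audited ideas."""
    return 1.0 / (statistics.mean(audit_errors) + 1e-6)

def weighted_score(forecasts: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Weighted average forecast for an unaudited idea."""
    total = sum(weights[f] for f in forecasts)
    return sum(p * weights[f] for f, p in forecasts.items()) / total

weights = {"alice": forecaster_weight([0.1, 0.2]),   # accurate -> high weight
           "bob": forecaster_weight([0.8, 0.9])}     # inaccurate -> low weight
print(weighted_score({"alice": 0.9, "bob": 0.3}, weights))  # ~0.81, near alice
```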

32
[anonymous]

Massive US-China exchange programme

Great power conflict, AI

Fund (university) students to live in the other country with a host family: US-China, Russia-US, China-India, potentially India-Pakistan. This is important if one thinks that personal experience makes individuals less likely to incentivise or encourage escalation, war, and certain competitive dynamics.

8
Jackson Wagner
This might have a hard time meeting the same effectiveness bar as #13, "Talent Search" and #17, "Advocacy for US High-Skill Immigration", which might end up having some similar effects but seem like more leveraged interventions.
2
IanDavidMoss
I disagree, as this idea seems much more explicitly targeted at reducing the potential for great power conflict, and I haven't yet seen many other tractable ideas in that domain.
5
Alex D
My understanding is the Erasmus Programme was explicitly started in part to reduce the chance of conflict between European states.

Nuclear/Great Power Conflict Movement Building

Effective Altruism

Given the current situation in Ukraine, movement-building related to nuclear x-risk or great power conflict would likely be much more tractable than it was until recently. We don't know how long this window will last, and the memory of the public can be short, so we should take advantage of this opportunity. This outreach should focus on people with an interest in policy or potential student group organisers, as these people are most likely to have an influence here.

Top ML researchers to AI safety researchers

Pay top ML researchers to switch to AI safety

Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.

Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.

EA Productivity Fund

Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.

Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can't afford them or would be put off by the cost (because of imposter syndrome or just because it feels selfish).

Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.

(Per Nick's note, reposting)

Market shaping and advanced market commitments

Epistemic Institutions; Economic Growth

Market shaping is when committed demand or other forces jump-start a market that would otherwise not form. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, but the approach has been used several times before for vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping efforts or advanced market commitments in our areas of interest.

(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)

Crowding in other funding

We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive, while others are only ever likely to attract (or want) longtermist funding. But we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:

  • Offering financial incentives (e.g. advanced market commitments)
  • Highlighting financial potential in major projects we would like to see (e.g. especially projects of the scale of the Grok / Brookfield bid for AGL)
  • Portfolio structures / financial engineering (e.g. Bridge Bio)
  • Appealing to social preferences (e.g. highlight points of 'common sense' overlap between longtermist views and ESG)
1
colin
I'll add that advanced market commitments are also useful in situations where a jump-start isn't explicitly required. In that case, they can act similarly to prize-based funding.

An Organisation that Sells its Impact for Profit

Empowering Exceptional People, Epistemic Institutions

Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) Selling Impact Certificates, (2) Prizes, and/or (3) Grants (but only if the grantmaker values the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis. Funding would be raised from for-profit investors. Staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if the funding mechanisms (1-3) become sufficiently popular, but given the increased funding in longtermist circles, this now looks increasingly likely.

See also Retrospective grant evaluations,  Retroactive public goods funding, Impact ... (read more)

Rationalism But For Group Psychology

Epistemic Institutions

LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.

We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and achieve more impact, just as the original rationalist movement did with individuals. We imagine this would involve identifying useful ideas from the group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as intentionally experimenting. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to align employees with organizational goals, better ways to keep track of projects, etc.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

9
Gavin
The Epistea Summer Experiment was a glorious example of this.  

Wild animal suffering in space

Space governance, moral circle expansion.

 

Terraforming other planets might cause animals to come to exist on those planets, through either intentional or unintentional actions. These animals might live net negative lives.

Also, we cannot rule out the possibility that there are already wild "animals" (or other sentient beings) living net negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)

Relevant research includes:

  • Whether wild animals lead net negative or positive lives on Earth, and under what conditions - and whether this might hold on different planets.
  • Tracking habitats, or even doing research on using AI and robotics to monitor and intervene in them. This might be critical if there are planets that have wild "animals" but are uninhabitable for humans, making it impossible to stay close and monitor (or even intervene in) the welfare of these animals.
  • Communication strategies related to wild animal welfare, as it seems to cause controversy, if not outrage.
  • Philosophical research, including population ethics, environmental ethics, comparing welfare/suffering between species, moral uncertainty, suffering-focused vs non-suffering focused ethics.
  • General philosophical work on the ethics of space governance, in relation to nonhuman animals.
6
Dawn Drescher
Another great concern of mine is that even if biological humans are completely replaced with ems or de novo artificial intelligence, these processes will probably run on great server farms that likely produce heat and need cooling. That results in a temperature gradient that might make it possible for small sentient beings, such as invertebrates, to live there. Their conditions may be bad, they may be r-strategists and suffer in great proportions, and they may also be numerous if these AI server farms spread throughout the whole light cone of the future. My intuition is that very few people (maybe Simon Eckerström Liedholm?) have thought about this so far, so maybe there are easy interventions to make that less likely to happen.
5
Dawn Drescher
Brian Tomasik and Michael Dello-Iacovo have related articles.
3
DC
Here's a related question I asked.

AI alignment prize suggestion: Introduce AI Safety concepts into the ML community

Artificial Intelligence

Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, and sometimes include initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for further research, and demonstrate that researchers can do alignment research while also publishing in top venues. A great example would be Optimal Policies Tend To Seek Power, published at NeurIPS. The Future Fund could advertise prizes for any paper that gets published at a top ML/NLP/computer vision conference (from ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.

2
Yonatan Cale
Risk: The course presents possible solutions to these risks, and the students feel like they "understood" AI risk; in the future it will be harder to talk to these students about AI risk since they feel like they already have an understanding, even though it is wrong. I am specifically worried about this because I try imagining who would write the course and who would teach it. Will these people be able to point out the problems in the current approaches to alignment? Will these people be able to "hold an argument" in class well enough to point out holes in the solutions that the students will suggest after thinking about the problem for five minutes? I'm not saying this isn't solvable, just that it's a risk.

EA Macrostrategy

Effective Altruism

Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.

Evaluating large foundations

Effective Altruism

GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming that they care about impact or about public opinion of their operations, and thus that our analysis could actually have some effect on them).

For instance, we've seen claims that the Global Fund, which spends $4B per year, meets a 2x GiveDirectly bar but not a GiveWell Top Charity bar.

This matters because most charity - and even most good charity - is still not done by EAs or run on EA lines. Also, even big, cautious foundations can risk waste / harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate of conservative giving failing, so that we can compare it to hits-based giving. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.

Refining EA communications and messaging

Values and Reflective Processes, Research That Can Help Us Improve

If we want to convince a broad spectrum of people of the importance of doing good and ensuring the long term goes well, it's imperative we find out which messages are "sticky" and which are quickly forgotten. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.

1
Jack Lewars
I think this exists (but could be much bigger and should still be funded by this fund).

TL;DR: EA Retroactive Public Goods Funding

In your format:

Deciding which projects to fund is hard, and one of the reasons for that is that it's hard to guess which projects will succeed and which will fail. But wait - startups have solved this problem: anybody is allowed to vet a startup and decide to invest (bet) their money on the startup succeeding, and if the startup does succeed, then the early investors get a big financial return.

The EA community could do the same; it is only missing the part where we give big financial returns to projects that turned out to be good.

This would make the fund's job much easier: they would have to vet which projects helped IN RETROSPECT, which is much easier, and they'd leave the hard prediction work to the market.

Context for proposing this

I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich and that would be somewhat broken to do. But I AM rich enough to fund this project and bet on it working well enough to get a Retroactive Public Good grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.

In Vitalik's words:

https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c

2
Ben Dean
Related: Impact Certificates

EA Forum Writers

Pay top EA Forum contributors to write about EA topics full time

Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.

Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.

5
Nathan Young
Pay users based on post karma (but not comment or question karma, which are really easy to get in comparison).
3
Yitz
Could create a disincentive to post more controversial ideas there, though.
2
Chris Leong
Goodhart's law
2
Nathan Young
I don't think we'd be wedded to a single metric. Also, isn't karma already vulnerable to Goodhart's law? I think we should already be concerned about this.
2
Nathan Young
I don't think we'd be wedded to this metric

Language models for detecting bad scholarship 

Epistemic Institutions

Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 features we need to protect ourselves against.

This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo). 

Take some claim P which is non-obvious enough to warrant a citation.

It seems relatively easy, given current tech, to answer: (1) "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - but under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.

It is very hard to answer (2) "Is the cited ar... (read more)
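
Question (1) can be framed as natural-language inference, and a rough baseline is possible with off-the-shelf models. The sketch below is one plausible approach, not the proposal's method; a real system would also need retrieval over the full cited document rather than a single hand-picked passage:

```python
# Baseline for "Does the cited passage entail claim P?" using an MNLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # labels: 0 contradiction, 1 neutral, 2 entailment
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_prob(cited_passage: str, claim_p: str) -> float:
    """Probability that the cited passage entails claim P."""
    inputs = tokenizer(cited_passage, claim_p,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 2].item()  # P(entailment)

print(entailment_prob(
    "85% of surveyed big data projects were abandoned before deployment.",
    "Most big data projects fail."))
```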

A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire

Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes

Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website. There are additional proposals also on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really we’re quite clueless about what the future will bring.

This week alone I’ve discussed with friends and acquaintances three decisions, in completely different contexts, that might make the difference between paradise and hell for all sentient life - not just in the abstract way that cluelessness forces us to assign some probability to almost any outcome, but in the sense where we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions on almost a daily basis and on hardly any more information.

As a result, the robustness of an intervention has been the key criterion for prioritiza... (read more)

1
marswalker
I had a similar idea, and I think a few more things need to be included in the discussion of this. There are multiple levels of ideas in EA, and I think a red team becomes much more valuable when it engages with issues that are applicable to the whole of EA. I think ideas like the institutional critique of EA, the other heavy tail, and others are often not read and internalized by EAs. It is worth having a team that makes arguments like these, then breaks them down and provides methods for avoiding the pitfalls they point out. Things brought up in critiques of EA should be specifically recognized and talked about as good. These ideas should be held up for examination, then passed out to our community so that we can grow and overcome the objections. I'm almost always lurking on the forum, and I don't often see posts talking about EA critiques. That should change.
2
Dawn Drescher
I basically agree, but in this proposal I was really referring to such things as “Professor X is using probabilistic programming to model regularities in human moral preferences. How can that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?” I also think that the category you’re referring to is very valuable, but I think those are “simply” contributions to priorities research as published by the Global Priorities Institute (e.g., working papers by Greaves and Tarsney come to mind). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles that I would class that way. I think priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well-known. Hence, my proposal is not meant to be about that.

Subsidise catastrophic risk-related markets on prediction markets

Prediction markets and catastrophic risk

Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.

*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so might show reluctance here too.
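
To give a sense of how a fund could size such subsidies, here is a sketch assuming the market uses Hanson's logarithmic market scoring rule (LMSR) - an assumption made here, as the proposal doesn't commit to a mechanism. Under LMSR the subsidiser's worst-case loss is bounded by b·ln(n) for n outcomes, so the liquidity parameter b can be chosen to fit a per-market budget:

```python
# LMSR liquidity-subsidy arithmetic. All figures are illustrative.
import math

def lmsr_cost(quantities: list[float], b: float) -> float:
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def max_subsidy(n_outcomes: int, b: float) -> float:
    """Upper bound on what subsidising one market can cost the sponsor."""
    return b * math.log(n_outcomes)

b = 7_000.0  # liquidity parameter: higher b = deeper market, larger bound
print(f"worst-case cost, binary market: ${max_subsidy(2, b):,.0f}")
cost = lmsr_cost([1_000, 0], b) - lmsr_cost([0, 0], b)
print(f"trader's cost for 1,000 YES shares: ${cost:,.2f}")
```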

FTX, which itself runs prediction markets, might be particularly well-suited for prediction-market interventions like this. I myself think they could do a lot to advance people's understanding of prediction markets if, in addition to their presidential prediction market, they also offered a conditional prediction market on how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans winning vs. the Democrats winning. Conditional prediction markets on important indicators for big national elections would provide directly useful information in addition to educating people about prediction markets' potential.

1
Alex D
My company seeks to predict or rapidly recognize health security catastrophes, and also requires an influx of capital when such an event occurs (since we wind up with loads of new consulting opportunities to help respond). Is there currently any way for us to incentivize thick markets on topics that are correlated with our business? The idea of getting the information plus the hedge is super appealing!

Pandemic preparedness in LMICs

Biorisk

COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced they are. While there certainly are many problems with pandemic preparedness in high-income countries that need to be addressed, LMICs face even more issues in detecting, identifying, containing, mitigating, and/or preventing currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMICs, opening up the risk of further, more virulent mutations.

We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local context. This project would be especially focused on novel pathogens that are more severe than currently known pathogens - and would help provide the resources and knowledge needed to upgrade plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise working in LMICs. An emergency fund and expert advice can also be provisioned to be ready to go when pathogens are... (read more)

Getting former hiring managers from quant firms to help with alignment hiring

Artificial Intelligence, Empowering Exceptional People

Despite having lots of funding, alignment seems not to have been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the buy-in to alignment required before one can contribute is often cited as a reason, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.

On malevolence: How exactly does power corrupt?

Artificial Intelligence / Values and Reflective Processes

How does it happen, if it happens? Some plausible stories:

  • Backwards causation: People who are “corrupted” by power always had a lust for power but deluded others and maybe even themselves about their integrity;
     
  • Being a good ruler (of any sort) is hard and at times very unpleasant; even the nicest people will try to cover up their faults, covering up causes more problems... and at some point it is very hard to admit that you were an incompetent ruler all along.
     
  • Power changes your incentives so much that it corrupts all but the strongest. The difference from the previous story is that value drift is almost immediate upon getting power.
     
  • A mix of the last two would be: you get more and more adverse incentives with every rise in power.
     
  • It might also be the case that most idealist people come into power under very stressful circumstances, which forces them to make decisions favouring consolidation of power (kinda instrumental convergence).
     
  • See also this on the personalities of US presidents and their darknesses.
     
2
MaxRa
Yes, that's interesting and plausibly very useful to understand better. It might also affect some EAs at some point. The hedonic treadmill might be part of it: you get used to the personal perks quickly, so you still feel motivated & justified to put ~90% of your energy into problems that affect you personally -> removing threats to your rule, marginal status improvements, getting along with people close to you. And some discussion of the backwards causation idea is in this oldie from Yudkowsky: Why Does Power Corrupt?

Bounty Budgets

Like Regranting, but for Bounties

In the same way that regranting decentralizes grantmaking, we could do the same thing for bounties. For example, give the top 20 AI safety researchers up to $100,000 each to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.

In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.

Bounties are great because you only pay out if it's successful. If, hypothetically, each researcher created 5 bounties at $10,000 each, that'd be 100 bounties - lots of experiments.

RFPs are great because they put less risk on the applicants and are also a scalable, low-management way to turn money into impact.

Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo

More public EA charity evaluators

Effective Altruism

There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge and Animal Charity Evaluators. We would like to see more professional funding opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas that are currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren’t yet covered such as AI and EA meta.

Longtermist risk screening and certification of institutions

Artificial Intelligence, Biorisk and Recovery from Catastrophe

Companies, nonprofits and government institutions participate and invest in activities that might significantly increase global catastrophic risk like gain-of-function research or research that might increase the likelihood of unaligned AGI. We’d like to see an organisation that evaluates and proposes policies and practices that should be followed in order to reduce these risks. Institutions that commit to following these practices and submit themselves to independent audits could be certified. This could help investors and funders to screen institutions for potential risks. It could also be used in future corporate campaigns to move companies and investors into adopting responsible practices.

2
Nathan Young
How would this be effective, rather than creating additional work for grantmakers and increasing the entry barriers for grantees? It seems that many similar schemes for other kinds of risk end up as meaningless box-ticking enterprises, which would lead to less effectiveness and possibly reputational harm to EA. This is my prior when I hear a new audit proposed, though I hope it won't apply in your case.
1
Patrick Gruban 🔸
I agree that there is a risk that this leads to additional burden without meaningful impact. Seeing the number of certifications currently deployed that are used publicly for marketing as well as to reduce supply-chain risks (see for example this certifier), I would put the chance that longtermist causes like biosecurity risks will be incorporated into existing standards, or launched as new standards, within the next 10 years at 70%. We can preempt this by building one or more standards based on actual expected impact instead of box-ticking. If this bet works out, then we might make a counterfactual impact; however, I would also like to see the organisation shut down after doing its research if it doesn't see a path to a certification having impact.

Resilient ways to archive valuable technical / cultural / ecological information
Biorisk and Recovery from Catastrophe

In ancient Sumeria, clay tablets recording ordinary market transactions were considered disposable. But today's much larger and wealthier civilization considers them priceless for the historical insight they offer. By the same logic, if human civilization millennia from now becomes a flourishing utopia, they'll probably wish that modern-day civilization had done a better job of resiliently preserving valuable information. For example, over the past 120 years, around one vertebrate species has gone extinct each year, meaning we permanently lose the unique genetic information that arose in that species through millions of years of evolution.
There are many existing projects in this space -- like the Internet Archive, museums storing cultural artifacts, and efforts to protect endangered species. But almost none of these projects are designed robustly enough to last many centuries with the long-term future in mind. Museums can burn down, modern digital storage technologies like CDs and flash memory aren't designed to last for centuries, and many... (read more)

2
Dawn Drescher
Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition: Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge, or even the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe. The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc. I think that is also something that none of the existing projects take into account.

AI safety “school” / More AI safety courses

Train People in AI Safety at Scale

Problem: Part of the talent bottleneck is caused by there not being enough people who have the relevant skills and knowledge to do AI safety work. Right now, there’s no clear way to gain those skills. There’s the AGI Fundamentals curriculum, which has been a great success, but aside from that, there’s just a handful of reading lists. This ambiguity and lack of structure lead to far fewer people getting into the field than otherwise would.

Solution: Create an AI safety “school” or a bunch more AI safety courses. Make it so that if you finish the AGI Fundamentals course, there are many more courses where you can dive deeper into various topics (e.g. an interpretability course, a value learning course, an agent foundations course, etc.). Make it so there’s a clear curriculum to build up your technical skills (probably just finding the best existing courses, putting them in the right order, and adding some accountability systems). This could be funded course by course, or funded as a school, which would probably lead to more and better-quality content in the long run.

Offer paid sabbatical to people considering changing careers

Empowering Exceptional People

People are sometimes locked into their non-EA careers because, while working, they do not have time to:

  • Prioritize what altruistic job would fit them best
  • Learn what they need for this job

Create an organization that offers paid sabbaticals to people considering changing careers to more EA-aligned jobs, to help with this transition. During the sabbatical, they could be members of a community of people in a similar situation, with coaching available.

Agree. I think that having an Advance Market Commitment system for this makes sense. E.g., FTX says, 'We will fund mid-career academics/professionals for up to x months to do y.' My experience is that most of the high-value people I know who are good professionals are sufficiently time-poor and dissuaded by uncertainty that they won't spend 2-5 hours applying for something they don't know they will get. The barriers and costs are probably greater than most EA funders realise.

An alternative/related idea is to have a simple EOI system where people can submit a fleshed-out CV and a paragraph and then get an AMC on an application - e.g., 'We think that there is a more than 60% chance that we would fund this and would therefore welcome a full application.'

A public EA impact investing evaluator

Effective Altruism, Empowering Exceptional People

Charity evaluators that publicly share their research - such as GiveWell, Founders Pledge and Animal Charity Evaluators - have arguably not only helped move a lot of money to effective funding opportunities but also introduced many people to the principles of effective altruism, which they have applied in their lives in various ways. Apart from some relatively small projects (1) (2) (3), there is currently no public EA research presence in the growing impact investing sector, which is both large in the amount of money being invested and in its potential to draw more exceptional people’s attention to the effective altruism movement. We’d love to see an organization that takes GiveWell-quality funding opportunity research to the impact investing space and publicly shares its findings.

2
Brendon_Wong
Seeing this late, but this is a wonderful idea! Will Roderick and I worked on "GiveWell for Impact Investing" a while ago and published this research on the EA Forum. We ultimately pursued other professional priorities, but we continue to think the space is very promising, stay involved, and may reenter it in the future.

Predicting Our Future Grants

Epistemic Institutions, Research That Can Help Us Improve

If we had access to a crystal ball that allowed us to know exactly what our grants five years from now would otherwise have been, we could make substantially better decisions now. Just making the grants we'd otherwise have made five years in the future would save a lot of grantmaking time and money, as well as cause many amazing projects to happen more quickly.

We don't have a crystal ball that lets us see future grants. But perhaps high-quality forecasts can be the next best thing. Thus, we're extremely excited about people experimenting with Prediction-Evaluation setups to predict the Future Fund's future grants with high accuracy, helping us to potentially allocate better grants more quickly.
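To illustrate what a minimal Prediction-Evaluation setup could look like, here is a toy sketch in Python. The forecasters, grant IDs, probabilities, and outcomes are all hypothetical; a real setup would need careful question design, resolution criteria, and incentives.

```python
# Toy sketch: score forecasters on how well they predicted which candidate
# grants the fund actually makes within five years. All names and numbers
# below are hypothetical.
from dataclasses import dataclass


@dataclass
class Forecast:
    forecaster: str
    grant_id: str
    probability: float  # credence that the grant is made within 5 years


def brier_scores(forecasts: list[Forecast], outcomes: dict[str, bool]) -> dict[str, float]:
    """Mean squared error between stated probabilities and realized outcomes (lower is better)."""
    errors: dict[str, list[float]] = {}
    for f in forecasts:
        realized = 1.0 if outcomes[f.grant_id] else 0.0
        errors.setdefault(f.forecaster, []).append((f.probability - realized) ** 2)
    return {name: sum(errs) / len(errs) for name, errs in errors.items()}


forecasts = [
    Forecast("alice", "grant-001", 0.8),
    Forecast("alice", "grant-002", 0.2),
    Forecast("bob", "grant-001", 0.5),
    Forecast("bob", "grant-002", 0.5),
]
outcomes = {"grant-001": True, "grant-002": False}
print(brier_scores(forecasts, outcomes))  # lower is better; alice outperforms bob here
```

Once forecasters build a track record under a scoring rule like this, their predictions about not-yet-made grants become decision-relevant evidence.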

Participatory longtermism

Values and Reflective Processes, Effective Altruism

Most longtermist and EA ideas come from a small group of people with similar backgrounds, but they could affect the global population now and in the future. This creates the risk of longtermist decision-makers not being aligned with that wider population. Participatory methods aim to involve people in decision-making about issues that affect them, and they have become common in fields such as international development, global health, and humanitarian aid. Although a lot could be learned from existing participatory methods, they would need to be adapted to issues of concern to EAs and longtermists. The fund could support the development of new participatory methods that fit with EA and longtermist concerns, and could fund the running of participatory processes on key issues.

Additional notes:

  • There is a field called participatory futures; however, it seems not very rigorous [based on a very rough impression; however, see the comment below about this], and as far as I know it hasn't been applied to EA issues.
  • Participedia has writeups of participatory methods and case studies from a variety of fields.
6
Gavin
This comments section is pretty participatory.
3
MaxRa
Cool idea! :) You might be interested in skimming the report Deliberation May Improve Decision-Making from Rethink Priorities. > In this essay from Rethink Priorities, we discuss the opportunities that deliberative reforms offer for improving institutional decision-making. We begin by describing deliberation and its links to democratic theory, and then sketch out examples of deliberative designs. Following this, we explore the evidence that deliberation can engender fact-based reasoning, opinion change, and under certain conditions can motivate longterm thinking. So far, most deliberative initiatives have not been invested with a direct role in the decision-making process and so the majority of policy effects we see are indirect. Providing deliberative bodies with a binding and direct role in decision-making could improve this state of affairs. We end by highlighting some limitations and areas of uncertainty before noting who is already working in this area and avenues for further research.
3
JBPDavies
Love the idea - just writing to add that Futures Studies, participatory futures in particular, and future scenario methodologies could be really useful for Longtermist research. Methods in these fields can be highly rigorous (I've been working with some futures experts as part of a project to design 3 visions of the future, which have just finished going through a lengthy stress-testing and crowd-sourcing process to open them up to public reflection and input), especially if the scenario design is approached in a systematised way using a well-developed framework. I could imagine various projects that aim to create a variety of different desirable visions of the future through participatory methods, identifying core characteristics, pathways towards them, system dynamics, and so on to illustrate the value and importance of longtermist governance to get there. Just one idea, but there are plenty of ways to apply this field to EA/Longtermism! Would love to talk about your idea more as it also chimes with a paper I'm drafting, 'Contesting Longtermism', looking at some of the core tensions within the concept and how these could be opened up to wider input. If you're interested in talking about it, feel free to reach out to me at j.b.p.davies@uu.nl
1
agnode
Thanks for the point about rigor - I'm not that familiar with participatory futures but had encountered it through an organisation that tends to be a bit hypey. It's good to know there is rigorous work in that field. I agree that there are lots of opportunities to apply it to EA/Longtermism, and your paper sounds interesting. I'll send an email.

Research on the long-run determinants of civilizational progress
Economic growth

What factors were the root cause of the Industrial Revolution?  Why did industrialization happen in the time and place and ways that it did?  How have the key factors supporting economic growth changed over the last two centuries?  Why do some developing countries manage to "catch up" to the first world, while others lag behind or get stuck in a "middle-income trap"?  Is the pace of entrepreneurship or scientific innovation slowing down -- and if so, what can we do about it?  Is increasing "vetocracy" an inevitable disease that afflicts all stable and prosperous societies (as Holden Karnofsky argues here), or can we hope to change our culture or institutions to restore dynamism?  At FTX, we'd be interested in funding research into these "progress studies" questions.  We're also interested in funding advocacy groups promoting potential policy reforms derived from the ideas of the progress studies movement.

2
Jackson Wagner
See also many of Zac Townsend's ideas, the idea of nuclear power & GMO advocacy, and my list of object-level planks in the progress-studies platform.

Pay prestigious universities to host free EA-related courses to very large numbers of government officials from around the world

Empowering Exceptional People

The direct benefit of the courses would be to give government officials better tools for thinking and talking with each other.

The indirect benefit could be to allow large numbers of predisposed officials to be seen by <some organisation>, which could use the opportunity to identify those with particular potential and offer them extra support or opportunities so they can make an even bigger impact.

The course needs to be free in order to overcome the blocker of otherwise needing to write a business case for attendance, which may then require some sort of tortuous approval process.

It needs to be hosted at a prestigious university to overcome the blocker of justifying the course to bosses or colleagues, by piggybacking off the university's brand.

Infrastructure to support independent researchers

Epistemic Institutions, Empowering Exceptional People  

The EA and Longtermist communities appear to contain a relatively large proportion of independent researchers compared to traditional academia. While working independently can provide the freedom to address impactful topics by liberating researchers from the perverse incentives, bureaucracy, and other constraints imposed on academics, the lack of institutional support can impose other difficulties that range from routine (e.g. difficulties accessing pay-walled publications) to restrictive (e.g. lack of mentorship, limited opportunities for professional development). Virtual independent scholarship institutes have recently emerged to provide institutional support (e.g. affiliation for submitting journal articles, grant management) for academic researchers working independently. We expect that facilitating additional and more productive independent EA and Longtermist research will increase the demographic diversity and expand the geographical inclusivity of these communities of researchers. Initially, we would like to determine the main needs and limitations independent... (read more)

4
Jackson Wagner
(I think this is a good idea!  For anyone perusing these FTX project ideas in the future, here is a post I wrote exploring drawbacks and uncertainties that prevent people like me from getting excited about independent research as a career.)

EA Health Institute/Chief Wellness Officer 

Empowering Exceptional People, Effective Altruism, Community Building 

Optimizing physical and mental health can improve cognitive performance and decrease burnout. We need EAs/longtermists to have the health resilience to weather the storm - physical fitness, sleep, nutrition, mental health. An institution could be created to assist EA-aligned organizations and individuals. Using best practices from high-performance workplace health, both personal and organizational, as well as innovative new ideas, a wellness team could help EAs have sustainable and productive careers. This could be done through consulting, coaching, preparation of educational materials, or retreats. From a community growth perspective, EA becomes more attractive to some when one doesn’t have to sacrifice health for deeply meaningful work.

(Disclosure - I'm a physician/physician wellness SME - helping with this could be a good personal fit.)

Unified, quantified world model

Epistemic Institutions, Effective Altruism, Values and Reflective Processes, Research That Can Help Us Improve

Effective altruism started out, to some extent, with a strong focus on quantitative prioritization along the lines of GiveWell’s quantitative models, the Disease Control Priorities studies, etc. But these models largely ignore complex, often nonlinear effects of interventions on culture, international coordination, and the long-term future. Attempts to transfer the same rigor to quantitative models of the long-term future (such as Tarsney’s set of models in The Epistemic Challenge to Longtermism) are still in their infancy. Otherwise, effective altruist prioritization today is a grab bag of hundreds of considerations that interact in complex ways that (probably) no one has an overview of. Decision-makers may forget to take half of them into account if they haven’t recently thought about them. That makes it hard to prioritize, and misprioritization becomes more and more costly with every year.

A dedicated think tank could create and continually expand a unified world model that (1) is a repository of all considerations that affect altruistic decisi... (read more)
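As a hedged illustration of what a single node in such a quantified model might look like, here is a toy Monte Carlo sketch in Python. The distributions, parameters, and the "flow-through" multiplier are all invented for illustration, not real estimates.

```python
# Toy sketch: one node of a quantified world model, combining an
# intervention's direct effect with an uncertain, nonlinear flow-through
# multiplier via Monte Carlo sampling. All numbers are illustrative.
import math
import random

random.seed(0)  # reproducible toy run


def sample_total_impact() -> float:
    funding = 1_000_000  # dollars allocated (assumed)
    # Uncertain cost-effectiveness of the direct intervention (assumed lognormal).
    cost_per_unit = random.lognormvariate(math.log(5_000), 0.7)
    direct_units = funding / cost_per_unit
    # Uncertain long-run multiplier for cultural/coordination effects (assumed).
    flow_through = random.lognormvariate(0.0, 0.5)
    return direct_units * flow_through


samples = sorted(sample_total_impact() for _ in range(100_000))
mean = sum(samples) / len(samples)
p5, p95 = samples[5_000], samples[95_000]
print(f"mean: {mean:.0f} units, 90% interval: [{p5:.0f}, {p95:.0f}]")
```

Chaining many such nodes, with their interactions made explicit, is where a dedicated think tank could add value over ad hoc, per-decision estimates.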

3
Max Ghenis
Cool - you might also be interested in my submission, "Comprehensive, personalized, open source simulation engine for public policy reforms". It's not in the pitch but my intent is for it to be global as well.
3
Dawn Drescher
Awesome, upvoted! You can also have a look at my “Red team” proposal. It proposes to use methods from your field applied to any EA interventions (political and otherwise) to steel them against the risk of having harmful effects.

Civic sector software

Economic Growth, Values and Reflective Processes

Software and software vendors are among the biggest barriers to instituting new public policies or processes. The last twenty years have seen staggering advances in technology, user interfaces, and user-centric design, but governments have been left behind, saddled with outdated, bespoke, and inefficient software solutions. Worse, change of any kind can be impractical with existing technology systems or when choosing from existing vendors. This fact prevents public servants from implementing new evidence-based practices, becoming more data-driven, or experimenting with new service models.

Recent improvements in civic technology are often at the fringes of government activity, while investments in best practices or “what works” are often impossible for any government to implement because of technology. So while over the last five years, there has been an explosion of investments and activity around “civic innovation,” the results are often mediocre. On the one hand, governments end up with little more than tech toys or apps that have no relationship to the outcomes that matter (e.g. poverty alleviation, service deli... (read more)

3
Yonatan Cale
Hey, this is somewhat my domain. The bottleneck is not building software; it is more like "governments are old gray organizations that don't want to change anything". If you find any place where the actual software development is the bottleneck, I'd be very happy to hear about it and maybe take part. I also expect many other EA developers would want to take part; it sounds like a good project.

(For context, I was the Chief Data Officer of the California State Government and CTO of Newark, NJ when Cory Booker was Mayor). 

I actually think the way to do this is to partner with one city and build everything they need to run the city. The problem is that people can't use piecemeal systems very well. It would just take a huge initial set of capital -- like exactly the type of capital that could be provided here. 

1
Yonatan Cale
Ah ok, forget about it being somewhat my domain :P Sounds like a really interesting suggestion. Especially if it were for a city that "matters" (one that will help people do important things?), I think this project could interest me and others. (I'm interested if you have opinions about https://zencity.io/, as a domain expert.)
1
Max Ghenis
Somewhat related, I submitted "Comprehensive, personalized, open source simulation engine for public policy reforms". Governments could also use the simulation engine to explore policy reforms and to improve operations, e.g. to establish individual households' eligibility for means-tested benefit programs.

Teaching secondary school students about the most pressing issues for humanity's long-term future

Values and Reflective Processes, Effective Altruism

Secondary education focuses mostly on the past and present, and tends not to address the most pressing issues for humanity’s long-term future. I would like to see textbooks, courses, and/or curriculum reform that promote evidence-based and thoughtful discourse about the major threats facing the long-term future of humanity. Secondary school students are a promising group for such outreach and education because they have their whole careers ahead of them, and numerous studies have shown that they care about the future. This may provide a significant benefit by making more young people care about these issues and support them with either their time or money.

High-quality human data

Artificial Intelligence

Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects. 

We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop optimized for researchers’ workflows. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.

Some alignment research teams currently manage their own contractors because existing services (such as surgehq.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.

Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – i.e., rapidly producing moderately la... (read more)
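As a rough sketch of the kind of tooling such a startup might build, here is a toy Python example of a task record with a simple inter-annotator agreement check. The field names and thresholds are invented for illustration, not a description of any real pipeline.

```python
# Toy sketch: represent a human-data task and flag low-quality or
# under-annotated items for review. Field names and thresholds are
# illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Task:
    instructions: str  # complex, researcher-written instructions
    payload: str       # e.g. an argument critique to evaluate
    ratings: list[int] = field(default_factory=list)  # one 1-5 rating per annotator


def needs_review(task: Task, min_annotators: int = 3, max_spread: int = 1) -> bool:
    """Flag tasks with thin coverage or too much annotator disagreement."""
    if len(task.ratings) < min_annotators:
        return True
    return max(task.ratings) - min(task.ratings) > max_spread


task = Task(
    instructions="Rate 1-5 how well this critique identifies the argument's weakest step.",
    payload="Critique: the argument assumes X without justification...",
    ratings=[4, 4, 2],
)
print(needs_review(task))  # True: a spread of 2 exceeds the allowed spread of 1
```

The point of automating checks like this is the fast feedback loop mentioned above: disagreements surface immediately, so instructions can be revised while the researchers still remember the context.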

Advocacy for digital minds

Artificial Intelligence, Values and Reflective Processes, Effective Altruism

Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.

X-risk Art Competitions

Fund competitions for art that makes x-risk emotionally compelling

Problem: Some EAs find longtermism intellectually compelling but not emotionally compelling, so they don’t work on it, yet feel guilty.

Solution: Hold competitions where artists make art explicitly intended to make x-risk emotionally compelling. Use crowd voting to determine winners.

Translate EA content at scale

Reach More Potential EAs in Non-English Languages

Problem: Lots of potential EAs don’t speak English, but most EA content hasn’t been translated.

Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.

7
Dawn Drescher
Little addition: I imagine that knowledgeable EAs in the respective target countries should do this rather than professional translators, so that they can provide full linguistic and cultural mediation rather than just translating the words.

Provide personal assistants for EAs

Empowering Exceptional People

Many senior EAs spend far too much time on busywork because it is hard to get a good personal assistant. This is currently the case because:

  1. There is no obvious source of reliable, vetted assistants.
  2. If an EA wants to become an assistant, it is hard for them to find a job with an EA or on EA-related projects.
  3. Assistants have an incentive to take on many clients to avoid loss of income if they lose one. This leads to assistants having less time per client, and thus more time is spent on communication and less on the work itself.
  4. Assistants tend to be paid personally by EAs instead of by their employers, which leads to using them less than would be optimal.
  5. There is no community of assistants sharing knowledge and helping each other.

All these factors would be removed if an agency managed personal assistants.

4
Dawn Drescher
Kat Woods (Nonlinear) is someone to talk to when it comes to this project.