
The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.

We have a longlist of project ideas that we’d be excited to help launch. 

We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous). 

All submissions must be received in the next week, i.e. by Monday, March 7, 2022. 

We are excited about this prize for two main reasons:

  • We would love to add great ideas to our list of projects.
  • We are excited about experimenting with prizes to jumpstart creative ideas.

To participate, you can either

  • Add your proposal as a comment to this post (one proposal per comment, please), or
  • Fill in this form

Please write your project idea in the same format as the project ideas on our website. Here’s an example:

Early detection center

Biorisk and Recovery from Catastrophes

By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.

You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.

Some rules and fine print:

  • You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
  • At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
  • At our discretion, we will award larger prizes for submissions that we really like.
  • Prizes will be awarded at the sole discretion of the Future Fund.

We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.

We’re excited to see what you come up with!

(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)

Comments (731)

Retrospective grant evaluations

Research That Can Help Us Improve

EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker's track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.

4
Avi Lewis
I'd like to expand on this: a think tank or paper that formulates a way of evaluating all grants by a set of objective, quantifiable criteria, in order to better inform future allocation decisions so that each dollar spent makes the greatest possible impact. In this respect, retrospective grant evaluation is but one variable for measuring grant effectiveness. I have a few more ideas that could be combined into some kind of weighted scoring mechanism for grant evaluation:

  • Social return on investment (SROI): arriving at a set of non-monetary variables to quantify social impact.
  • Cost-effectiveness analysis: GiveWell is a leader in this. We could consider applying some of their key learnings from the not-for-profit space to EA projects.
  • Horizon scanning: governmental bodies have departments that perform this kind of work. A proposal could be assessed by its alignment with emerging-technology forecasts.
  • Backcasting: seek out ventures that are working towards a desirable future goal.
  • Pareto optimality: penalize ideas that could have a negative impact on factors or people outside the intended target audience.
  • Competence and track record: prioritize grant allocators/judges based on previous successful grants, and prioritize grants to founders or organizations with a proven track record of competence.

Obviously this list could go on; this is just a small number of possible variables. The idea is simply to build a model that can score the utility of a proposed grant.
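A weighted scoring mechanism of the kind described above could be sketched as follows. The criteria, weights, and example scores are illustrative assumptions, not a proposed standard:

```python
# Sketch of a weighted scoring model for grant proposals.
# Criteria and weights are illustrative placeholders.
WEIGHTS = {
    "sroi": 0.3,             # social return on investment
    "cost_effectiveness": 0.3,
    "tech_alignment": 0.1,   # horizon-scanning / forecast alignment
    "backcasting_fit": 0.1,
    "externalities": 0.1,    # negative spillovers (scored negatively)
    "track_record": 0.1,
}

def score_grant(criteria_scores: dict) -> float:
    """Weighted sum of per-criterion scores, each roughly in [0, 1]
    (externalities may be negative as a penalty)."""
    return sum(WEIGHTS[c] * criteria_scores.get(c, 0.0) for c in WEIGHTS)

example = {
    "sroi": 0.8,
    "cost_effectiveness": 0.7,
    "tech_alignment": 0.5,
    "backcasting_fit": 0.4,
    "externalities": -0.2,
    "track_record": 0.9,
}
print(round(score_grant(example), 3))
```

The hard part, of course, is not the arithmetic but defining the criteria and eliciting defensible scores; a real model would also need to handle missing data and disagreement between raters.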
1
brb243
Does this neglect the notion that some grants are made to strategically develop interest, by presenting ideas in ways that appeal to different decisionmakers, since the objectives are already largely known, such as improving the lives of humans and animals in the long term and preventing actors (including those who use and develop AI) from reducing the wellbeing of these individuals? It could be a bit of a reputational-loss risk to evaluate along the lines of: 'well, we started convincing the government to focus on the long term by appealing to the extent of the future, so now we can start talking about quality of life in various geographies, and if that goes well we move on to the advancement of animal-positive systems across spacetime?'

This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.

I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.

7
Stephen Clare
I'm (pleasantly) surprised by the number of entries! But as a result the Forum seems pretty far from optimal as a platform for this discussion. Would be helpful to have a way to filter by focus area, for example.
3
Nathan Young
Yeah I suggest it should be done like this, with search and filters as you suggest. https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=G7aLWq4zypE77Fn6f 
6
Taras Morozov
To prove the point: ATM the most upvoted comment is also the oldest one - Pablo's Retrospective grant evaluations.
4
Greg_Colbourn
The winners have been announced. It's interesting to note the low correlation between comment karma and awards.

Of the (3 out of 6) public submissions, the winners had a mean of 20 karma [as of posting this comment], minimum 18, and the (9 out of 15) honourable mentions a mean of 39 (suggesting perhaps these were somewhat weighted "by popular demand"), minimum 16. None of the winners were in the top 75 highest-rated comments; 8/9 of the publicly posted honourable mentions were (including 4 in the top 11).

There are 6 winners and 15 honourable mentions listed in the OP (21 total); the top 21 public submissions had a mean karma of 52, minimum 38; the top 50 a mean of 40, minimum 28; and the top 100 a mean of 31, minimum 18. And there are 86 public submissions not amongst the awardees with higher karma than the lowest-karma award winner. See spreadsheet for details.

Given that half of the winners were private entries (2/3 if accounting for the fact that one was only posted publicly 2 weeks after the deadline), and 40% of the honourable mentions, one explanation could be that private entries were generally higher quality. Note karma is an imperfect measure (so in addition to the factor Nathan mentions, maybe the discrepancy isn't that surprising).
2
Nathan Young
Alternatively, there could be an alternate ranking mode where you are shown two comments at once and choose whether one is better or they are about the same. Even a few people doing that would start to give a sense of whether they agree with the overall ranking.
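Pairwise "which is better?" votes like this can be aggregated with a simple Elo-style update. A minimal sketch, where the K-factor of 32 and the initial rating of 1000 are arbitrary conventional choices:

```python
# Elo-style aggregation of pairwise "which comment is better?" votes.
# K and the starting rating of 1000 are conventional, arbitrary choices.
K = 32

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift rating points from loser to winner, scaled by surprise."""
    ratings.setdefault(winner, 1000.0)
    ratings.setdefault(loser, 1000.0)
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = {}
# Hypothetical votes over three comments A, B, C.
for winner, loser in [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]:
    update(ratings, winner, loser)

print(sorted(ratings, key=ratings.get, reverse=True))  # ['A', 'B', 'C']
```

A Bradley-Terry fit would be the more principled batch version, but even this online update converges on a sensible ordering with surprisingly few votes.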

Starting EA community offices

Effective altruism

Some cities, such as Boston and New York, are home to many EAs and some EA organizations, but lack dedicated EA spaces. Small offices in these cities could greatly facilitate local EA operations. Possible uses of such offices include: serving as an EA community center, hosting talks or reading groups, providing working space for small EA organizations, reducing overhead for event hosting, etc.

(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)

Here is a more ambitious version:

EA Coworking Spaces at Scale

Effective Altruism

The EA community has created several great coworking spaces, but mostly in an ad hoc way, with large overheads. Instead, a standard EA office could be created in up to 100 towns and cities. Companies, community organisers, and individuals working full-time on EA projects would be awarded a membership that allows them to use these offices in any city. Members gain from being able to work more flexibly, in collaboration with people with similar interests (this especially helps independent researchers with motivation). EA organisations benefit from a decreased need to do office management (which can be done centrally without special EA expertise). EA community organisers gain easier access to an event space and standard resources, such as a library and hot-desking space, plus some access to the expertise of others using the office.

Here is an even more ambitious one:

Found an EA charter city

Effective Altruism

A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.

9
RyanCarey
What would be the value add of an EA city, over and above that of an EA school and coworking space? For example, I don't see why you need to eat at an EA restaurant, rather than just a regular restaurant with tasty and ethical food. Note also that the libertarian "Free State Project" seems to have failed, despite there being many more libertarians than effective altruists.
2
mako yass
Lower cost of living, meaning you can have more people working on less profitable stuff. I'm not sure 5000 free staters (out of 20k signatories) should be considered failure.
2
RyanCarey
Right, but it sounds like it didn't go well afterwards? https://www.google.com/amp/s/newrepublic.com/amp/article/159662/libertarian-walks-into-bear-book-review-free-town-project
1
Leo
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
3
RyanCarey
Could you elaborate on the policies? And what, roughly, are you picturing - an EA-sympathising municipal government, or a more of a Honduran special economic zone type situation?
1
Leo
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thought into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational colonization of other planets when nobody on Earth has ever managed to successfully found even one rational city?
1
mako yass
Note, VR is going to get really good in the next three years, so I wouldn't personally recommend getting too invested in any physical offices, but I guess as long as we're renting it won't be our problem.
4
Jeff Kaufman 🔸
I think it is pretty unlikely that VR improvements on the scale of 3 years will make people stop caring about being physically in person. This is a really hard problem that people have been working on for decades, and while we have definitely made a lot of progress, if we were 3 years from "who needs offices?" I would expect to already see many early adopters pushing VR as a comfortable environment for general work (VR desktop) or meetings.
1
mako yass
What problem are you referring to? Face tracking and remote presence didn't have a hardware platform at all until 2016, weren't a desirable product until maybe this year (mostly due to Covid), and won't be a strongly desirable product until hardware starts to improve dramatically next year. And due to the perversity of social-software economics, it won't be profitable in proportion to its impact, so it'll come late.

There are currently zero non-blurry face-tracking headsets that are light enough to wear throughout a workday, so you should expect to see nobody using VR for work. But we know that next year there will be at least one of those (Apple's headset). It will appear suddenly and without any viable intermediaries. This could be a miracle of Apple, but from what I can tell, it's not. Competitors will be capable of similar feats a few years later. (I expect limited initial impact from Apple's VR, given limited availability and Apple's reluctance to open the gates; the VR office won't come all at once, even though the technical requirements will.)

(You can get headsets with adequate visual acuity (60 ppd) right now, but they're heavy, which makes them less convenient to use than 4k screens. They're expensive, and they require a bigger, heavier, and possibly even more expensive computer to drive them (though this was arguably partly a software problem), which also means they won't have the portability benefits that 2025's VR headsets will have, which means they're not going to be practical for much at all. And afaik the software for face tracking isn't available for them, and even if it were, it wouldn't have a sufficiently large user network in professional realms.)
2
Chris Leong
You think they'll get past the dizziness problem?
1
mako yass
I think everyone will adapt. I vaguely remember hearing that there might be a relatively large contingent of people who never do adapt, though I was unable to confirm this with 15 minutes of looking just now. Every accessibility complaint I came across seemed to be a solvable software problem rather than anything fundamental.
6
Chris Leong
I heard that New York was starting a coworking space as well
2
JanB
I think Berlin has something like this
4
victor.yunenko
Indeed, the space was organized by Effektiv Spenden: teamwork-berlin.org
1
Yonatan Cale
I think EA Israel would have more people working remotely in international organizations if we had community offices. [We recently got an office which I'm going to check out tomorrow; Not an ideal location for me but will try!]

Investment strategies for longtermist funders

Research That Can Help Us Improve, Epistemic Institutions, Economic growth

Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out. 

We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.

Some of the ways the strategies of altruistic funders may differ include:

  • Mission-correlated investing
... (read more)
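The core intuition behind mission-correlated investing can be shown with a toy two-scenario comparison: an asset is worth more to a funder if it pays off in the worlds where money matters most to the mission. All scenario names, probabilities, and payoffs below are invented for illustration:

```python
# Toy illustration of mission-correlated investing.
# All numbers are invented for illustration only.
scenarios = {  # scenario -> (probability, marginal value of $1 to the mission)
    "high_need_world": (0.3, 3.0),
    "business_as_usual": (0.7, 1.0),
}
assets = {  # asset -> payoff per $1 invested, in each scenario
    "index_fund":    {"high_need_world": 1.1, "business_as_usual": 1.1},
    "mission_hedge": {"high_need_world": 1.6, "business_as_usual": 0.9},
}

def mission_value(asset: str) -> float:
    """Probability-weighted, mission-weighted payoff of $1 in the asset."""
    return sum(p * v * assets[asset][s] for s, (p, v) in scenarios.items())

for name in assets:
    print(name, round(mission_value(name), 3))
```

In this toy setup the two assets have nearly identical raw expected returns (1.10 vs 1.11), yet the mission-correlated one is clearly preferable once payoffs are weighted by how much each dollar is worth to the funder in each scenario.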

I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up/modelled on the existing EA long term future fund (which I recall reading about but can't find now, sorry)

 

Edit - found it and some ideas - see this and top level post.

2
Greg_Colbourn
Just going to note that SBF/FTX/Alameda are already setting a very high benchmark when it comes to investing!
1
brb243
A systemic change investment strategy for your review.
1
JBPDavies
You may be interested in the following project I'm working for: https://deeptransitions.net/news/the-deep-transition-futures-project-investing-in-transformation/ . The project goal is developing a new investment philosophy & strategy (complete with new outcome metrics) aimed at achieving transformational systems change. The project leverages the Deep Transitions theoretical framework as developed within the field of Sustainability Transitions and Science, Technology and Innovation Studies to create a theory of change and subsequently enact it with a group of public and private investors. Would recommend diving into this if you're interested in the nexus of investment and transformation of current systems/shaping future trajectories. I can't say too much about future plans at this stage, except that following the completion of the current phase (developing the philosophy, strategies and metrics), there will be an extended experimentation phase in which these are applied, tested and continuously redeveloped.

Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles

Effective Altruism

When it comes to enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do related research (e.g. on health issues) themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.

Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.
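At its simplest, "figuring out which interventions work for each individual" could look like a per-person multi-armed bandit over candidate interventions. A toy Thompson-sampling sketch, where the interventions, their "true" effect rates, and the binary daily outcome measure are all invented:

```python
import random

# Toy Thompson-sampling loop for choosing which intervention to trial
# next for one person, based on binary "good day" outcomes.
# Interventions and their true effect rates are invented.
random.seed(0)
true_rates = {"sleep_coaching": 0.6, "exercise_plan": 0.5, "nutrition": 0.7}
successes = {k: 1 for k in true_rates}  # Beta(1, 1) uniform priors
failures = {k: 1 for k in true_rates}

for _ in range(500):
    # Sample a plausible effect rate for each intervention; trial the best.
    draws = {k: random.betavariate(successes[k], failures[k])
             for k in true_rates}
    choice = max(draws, key=draws.get)
    if random.random() < true_rates[choice]:  # simulated outcome
        successes[choice] += 1
    else:
        failures[choice] += 1

best = max(true_rates, key=lambda k: successes[k] / (successes[k] + failures[k]))
print(best)
```

Real optimal experiment design would be far richer (continuous outcomes, carry-over effects, sharing information across people), but the bandit framing captures the key idea of spending most trials on the interventions that look most promising for that individual.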

6
Brendon_Wong
I was going to write a similar comment for researching and promoting well-being and well-doing improvements for EAs as well as the general public! Since this already exists in similar form as a comment, strong upvoting instead. Relevant articles include Ben Williamson’s project (https://forum.effectivealtruism.org/posts/i2Q3DTsQq9THhFEgR/introducing-effective-self-help) and Dynomight’s article on “Effective Selfishness” (https://dynomight.net/effective-selfishness/). I also have a forthcoming article on this. Multiple project ideas that have been submitted also echo this general sentiment, for example “Improving ventilation,” “Reducing amount of time productive people spend doing paperwork,” and “Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin).” Edit: I am launching this as a project called Better! Please get in touch if you're interested in funding, collaborating on, or using this!

Reducing gain-of-function research on potentially pandemic pathogens

Biorisk

Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.

 

Additional notes:

  • There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals(*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishi
... (read more)

Putting Books in Libraries

Effective Altruism
 

The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy for each library any number of books from the menu. This would ensure that folks interested in EA-related topics who browse a library discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.

The case for the project is that if you assume that the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than its financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries over the space of a short period of weeks to months. It also can serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
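The back-of-the-envelope case can be made explicit. The $200k contributor value, $20 cost, and one read per year are the figures stated above; the shelf life and conversion probability are additional assumptions invented here to complete the calculation:

```python
# Back-of-the-envelope expected value of placing one book in a library.
# First three figures are from the proposal; the last two are assumptions.
value_per_contributor = 200_000   # $ value of one new EA contributor
cost_per_book = 20                # $ per book placed
reads_per_book_per_year = 1
years_on_shelf = 5                # assumed shelf life
p_convert_per_read = 0.001        # assumed chance a read yields a contributor

reads = reads_per_book_per_year * years_on_shelf
expected_value = reads * p_convert_per_read * value_per_contributor
print(expected_value, expected_value / cost_per_book)
```

Under these assumptions each $20 book yields roughly $1,000 in expected value, a 50x return; the estimate is obviously most sensitive to the conversion probability, which is the number most worth checking empirically.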

I like this idea, but I wonder: how many people/students actually still use physical libraries? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).

5
Cillian_
A way around this could be to provide e-books and audio books instead of physical copies. Would also make the distribution easier. (In the UK at least, it's possible to borrow e & audio from your local library using the Libby app)
3
Greg_Colbourn
I imagine that e-book systems (text and audio) work via access to large libraries, rather than needing people to request books be added individually? So maybe there is no action needed on this front (although someone should probably check that most EA books are available in such collections).
2
mic
My understanding is that individual libraries license an ebook for a number of uses or a set period of time (say, two years).
2
mic
I think print books are still preferred by more readers compared to e-books. You might as well donate the books in both the physical and digital formats and probably also as an audiobook. It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.

I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.

In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.

8
RyanCarey
Not sure, but targeted social media advertising would also be a great project.
6
Greg_Colbourn
Added.

Never Again: A Blue-Ribbon Panel on COVID Failures

Biorisk, Epistemic Institutions

Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?

We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name. 

https://en.wikipedia.org/wiki/Never_again 

2
Peter Wildeford
Sorry - I was not aware of this
2
Ozzie Gooen
No worries! I assumed as such.

Are you thinking of EAs running this themselves?  We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.

I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel".   In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project.  But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.

2
IanDavidMoss
This is a great point. Also worth noting that there have been some retrospectives already, e.g. this one by the WHO: https://theindependentpanel.org/wp-content/uploads/2021/05/COVID-19-Make-it-the-Last-Pandemic_final.pdf It would be worth considering the right balance between putting resources toward conducting an original analysis vs. mustering the political will for implementing recommendations from retrospectives like those above.
4
Jan_Kulveit
Note that CSER is running a project roughly in this direction.
4
Sean_o_h
An early output from this project: Research Agenda (pre-review), "Lessons from COVID-19 for GCR governance: a research agenda".

The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management, such as extreme climate change, radical loss of biodiversity, and the governance of extreme risks posed by new technologies.

Our study aims to identify key moments, 'inflection points', that significantly shaped the catastrophic trajectory of COVID-19. To that end, this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines, and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, both at the national and the global level, using counterfactual analysis.

Four aspects are used to assess candidate inflection points within each cluster:
1. the information available at the time;
2. the decision-making processes used;
3. the capacity and ability to implement different courses of action; and
4. the communication of information and decisions to different publics.

The Research Agenda identifies crucial questions in each cluster for all four aspects that should enable the identification of the key lessons from COVID-19 and the pandemic response.
2
Sean_o_h
https://www.cser.ac.uk/research/lessons-covid-19/

Cognitive enhancement research and development (nootropics, devices, ...)

Values and Reflective Processes, Economic Growth

Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability (such as long/short-term memory, abstract reasoning, or creativity) and any stage of the research and development pipeline, from wet-lab research and engineering, through testing in humans, to product development.

 

Additional notes on cognitive enhancement research:

  • Importance:
    • Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
    • Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising, there is some precedent (e.g. massive benefit
... (read more)
5
Jackson Wagner
I think this is an underrated idea, and should be considered a good refinement/addition to the FTX theme #2 of "AI-based cognitive aids". If it's worth kickstarting AI-based research-assistant tools in order to make AI safety work go better, then doesn't the same logic apply to:

  • Supporting the development of brain-computer interfaces like Neuralink.
  • Research into potential nootropics (glad to hear you are working on replicating the creatine study!) or the negative cognitive impact of air pollution and other toxins.
  • Research into tools/techniques to increase focus at work, management best practices for research organizations, and other factors that increase productivity/motivation.
  • Ordinary productivity-enhancing research software like better note-taking apps, virtual-reality remote-collaboration tools, etc.

The idea of AI-based cognitive aids only deserves special consideration insofar as:

1. Work on AI-based tools will also contribute to AI safety research directly, but won't accelerate AI progress more generally. (This assumption seems sketchy to me.)
2. The benefit of AI-based tools will get stronger and stronger as AI becomes more powerful, so it will be most helpful in scenarios where we need help the most. (IMO this assumption checks out. But this probably also applies to brain-computer interfaces, which might allow humans to interact with AI systems in a more direct and high-bandwidth way.)

Create and distribute civilizational restart manuals

A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme disasters.

The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus 10 most important books sent to 100 safe and relatively uncorrelated locations -- somewhere in New Zealand, the Antarctica research base, a couple of nuclear bunkers, nuclear submarines, etc.

We are perhaps even more concerned that great moral values, like concern for all sentient beings, survive and re-emerge than we are about preserving civilization itself, so we would love for people to do further research into how to preserve cosmopolitan values as well.

My comment from another thread applies here too:

Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition:

Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge or even if it’s the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe.

The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc.

I think that is also something that none of the existing projects take into account.

5
Greg_Colbourn
Relatedly, see this post about continuing AI Alignment research after a GCR.
2
Dawn Drescher
Very good!
3
ben.smith
Building on the above idea...

Research the technology required to restart modern civilization and ensure the technology is understood and accessible in safe havens throughout the world

A project could ensure that not only the know-how but also the technology exists dispersed in various parts of the world to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand's economy is highly specialized and for many technologies relies on importing technology rather than producing it indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training enabling strategic technologies important for restart could be planted in locations like New Zealand and other social contexts which are relatively safe. At an extreme, industries could be subsidized which localize technology required for a restart. This would not necessarily mean the most advanced technology; rather, it means technologies that have been important to develop to the point we are at now.
3
Linch
Yes this is exciting to me, and related. Though of course generalist research talent is in short supply within EA, so the bar for any large-scale research project taking off is nontrivially high.
2
Dawn Drescher
I didn’t write this up as a separate proposal as it seemed a bit self-serving, but creating underground cities for EAs with all the ALLFED technology and whatnot and all these backups could enable us to afterwards build a utopia with all the best voting methods and academic journals that require Bayesian analyses and publish negative results and Singer on the elementary school curriculum and universal basic income etc.
2
Hauke Hillebrandt
All of Wikipedia is just 20GB. Maybe there could be a way to share backups via BitTorrent or an 'offline version' of it... it would fit comfortably on most modern smartphones.
8
Linch
Digital solutions are not great because ideally you want something that can survive centuries or at least decades. But offline USBs in prominent + safe locations might still be a good first step anyway.
2
Greg_Colbourn
I've got a full version of the English Wikipedia, complete with images, on my phone (86GB). It's very easy to get using the Kiwix app.
2
Greg_Colbourn
I note there isn't much on Kiwix in terms of survival/post-apocalypse collections (just a few TED talks and YouTube videos): a low-hanging fruit ripe for the picking.
2
Greg_Colbourn
Maybe someone should make an EA related collection and upload it to Kiwix? (Best books, EA Forum, AI Alignment Forum, LessWrong, SSC/ACX etc). This might be a good way of 80/20-ing preserving valuable information. As a bonus, people can easily and cheaply bury old phones with the info on, along with solar/hand-crank chargers.
1
wbryk
The  group who discovers this restart manual could gain a huge advantage over the other  groups in the world population -- they might reach the industrial age within a few decades while everyone else is still in the stone age. This discoverer group will therefore have a huge influence over the world civilization they create. I wonder if there were a way to ensure that this group has good values, even better values than our current world. For example, imagine there were a series of value tests within the restart manual that the discoverers were required to pass in order to unlock the next stage of the manual. Either multiple groups rediscover the manual and fail until one group succeeds, or some subgroup unlocks the next step and is able to leap technologically above the others in the group fast enough to ensure that their values flourish. If those value tests somehow ensure that a high score means the test-takers care deeply about the values we want them to have, then only those who've adopted these values will rule the earth. As a side note, this would be a really cool short story or movie :)

SEP for every subject

Epistemic institutions

Create free online encyclopedias for every academic subject (or those most relevant to longtermism) written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decisionmakers and the public with better access to academic views on a variety of topics.

5
Peter S. Park
Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?
2
agnode
I've read that experts often get frustrated with wikipedia because their work ends up getting undone by non-experts. Also there probably needs to be financial support and incentives for this kind of work. 
1
brb243
Yeah make it accessible and normally accepted.
2
Yitz
This would have to be a separate project from my proposed direct Wikipedia editing, but I'd  be very much in support of this (I see the efforts as being complementary)

Preventing factory farming from spreading beyond the earth

Space governance, moral circle expansion (yes I am also proposing a new area of interest.)

 

Early space advocates such as Gerard O’Neill and Thomas Heppenheimer had both included animal husbandry in their designs of space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA, have all expressed interests or announced projects to employ fish or insect farming in space. 

This, if successful, might multiply the number of suffering farmed animals to many times the current population on earth, spread across the long-term future. Research is needed in areas like:

... (read more)

Purchase a top journal

Metascience

Journals give academics bad incentives: they require new knowledge to be written in hard-to-understand language, published without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivised high-quality work on the most important topics would begin to turn the scientific system around.

We could, of course, simply get the future fund to pay for this. There is, however, an alternative that might be worth thinking about.

This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.

Here is one way we could do this:

  1. Use a system like pol.is to identify points of consensus between universities. This should be about the rules going forward if we buy the journal. For example, do they all want pre-registration? What should the copyright situation be? How should peer review work? How should the journal be run? Etc.
  2. Whatever the consensus is, commit to implementing it if the buyout is successful
  3. Start crowdsourcing the funds needed. To maximise the chance of success, this should be done using a DAC (dominant assurance contract). This works like any other crowdfunding mechanism (GoFundMe, Kickstarter, etc), except we have a pool of money that is used to pay the early backers if we fail to meet the goal. If the standard donation size we're asking the unis for i
... (read more)
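The refund-plus-bonus logic of a dominant assurance contract can be sketched in a few lines (a toy model; all names and numbers below are hypothetical):

```python
# Toy model of a dominant assurance contract (DAC), after Tabarrok:
# if the funding goal is missed, every backer is refunded AND paid a
# bonus from a separate escrow, so pledging weakly dominates abstaining.

def settle_dac(pledges, goal, bonus_rate):
    """Return (funded, net_cash_flows) for a simple DAC settlement."""
    total = sum(pledges.values())
    if total >= goal:
        # Goal met: pledges are collected and the public good is produced.
        return True, {b: -amt for b, amt in pledges.items()}
    # Goal missed: pledges are refunded, plus a bonus from the escrow,
    # so each backer's net cash flow is positive.
    return False, {b: amt * bonus_rate for b, amt in pledges.items()}

funded, flows = settle_dac({"uni_a": 40_000, "uni_b": 25_000},
                           goal=100_000, bonus_rate=0.05)
# Goal missed: uni_a nets +2,000, uni_b nets +1,250.
```

The point of the escrow is that would-be free riders now lose money by sitting out a failed campaign, which is what makes contributing the dominant strategy.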
3
Jonathan Nankivell
Update: I emailed Alex Tabarrok to get his thoughts on this. He originally proposed using dominant assurance contracts to solve public good problems, and he has experience testing it empirically. He makes the following points about my suggestion:

  • The first step is the most important. Without clarity on what the public good will be and who is expected to pay for it, the DAC won't work.
  • You should probably focus on libraries as the potential source of funding. They are the ones who pay subscription fees, and they are the ones who would benefit from this.
  • DACs are a novel form of social technology. It might be best to try to deliver smaller public goods first, allowing people to get more familiar, before trying to buy a journal.

He also suggested other ways to solve the same problem:

  • Have you considered starting a new journal? This should be cheaper. There would also be a coordination question to solve to make it prestigious, but this one might be easier.
  • Have you considered 'flipping' a journal? Could you take the editors, reviewers and community that supports an existing journal, and persuade them to start a similar but open access journal? (The Fair Open Access Alliance seem to have had success facilitating this. Perhaps we should support them?)

My current (and weakly held) position is that flipping editorial boards to create new open access journals is the best way to improve publishing standards. Small steps towards a much better world. Would it be possible for the Future Fund to entice 80% of the big journals to do this? The top journal in every field? Maybe.
2
brb243
This is a reputational loss risk of an actor in the broader EA community seeking to influence the scientific discourse by economic/peer unreviewed means? There are repositories, such as of the Legal Priorities Project, of papers, that are cool and the EA community pays attention to aggregate narratives to keep some of its terms rather exclusive and convincing. If you mean coordinating research, to learn from the scientific community, then it can make sense to read papers and corresponding with academics. Maybe on the EA Forum or so. No need to buy a journal.
2
James Bailey
Agree, was thinking of submitting a proposal like this. A few ways to easily improve most journals:

  • Require data and code to be shared
  • Open access, but without the huge author fees most open access journals charge
  • If you do charge any fees, use them to pay reviewers for fast reviews
1
Jonas Moss
Shouldn't reviewers be paid, regardless of fees? It is a tough job, and there should strong incentives to do it properly. 

A Longtermist Nobel Prize

All Areas

The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.

(A variation on a suggestion by DavidMoss)

2
Gavin
How much of the prestige is the money value, how much just the age of the prize, and how much the association with a fancy institution like the Swedish monarchy?  I seem to remember that Heisenberg etc were more excited by the money than the prize, back in the day.
2
RyanCarey
The money isn't necessary - see the Fields Medal. Nor is the Swedish Monarchy - see the Nobel Memorial Prize in Econ. Age obviously helps. And there's some self-reinforcement - people want the prize that others want. My guess is that money does help, but this could be further investigated.
4
Hauke Hillebrandt
The Jacobs Foundation awards $1m prizes to scientists as a grant - I think this might be one of the biggest - one could award $5-10m to make it the most prestigious prize in the world.
1
Taras Morozov
I think the Templeton Prize has become prestigious because they deliberately give more money than the Nobel.

Megastar salaries for AI alignment work

Artificial Intelligence

Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.

5
Greg_Colbourn
Here's a more fleshed out version, FAQ style. Comments welcome.

Longtermist Policy Lobbying Group

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bi-partisan, there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. We think that while such a project could scale to be very large once successful, anyone working on it should start small and tread carefully, aiming to avoid issues around the unilateralist curse and ensuring not to make longtermism an overly partisan issue. It's also likely that longtermist lobbying might be best done as lobbying for clear areas related to longtermism, such as climate change mitigation or pandemic preparedness, rather than for longtermism as a distinct idea.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

4
IanDavidMoss
I think some form of lobbying for longtermist-friendly policies would be quite valuable. However, I'm skeptical that running lobbying work through a single centralized "shop" is going to be the most efficient use of funds. Lobbying groups tend to specialize in a specific target audience, e.g., particular divisions of the US federal government or stakeholders in a particular industry, because the relationships are really important to the success of initiatives and those take time to develop and maintain. My guess is that effective strategies to get desired policies implemented will depend a lot on the intersection of the target audience + substance of the policy + the existing landscape of influences on the relevant decision-makers. In practice, this would probably mean at the very least developing a lot of partnerships with colleague organizations to help get things done or perhaps more likely setting up a regranting fund of some kind to support those partners. Happy to chat about this further since we're actively working on setting something like this up at EIP.
4
Peter Wildeford
I agree with you on the value of not overly centralizing this and of having different groups specialize in different policy areas and/or approaches.

Landscape Analysis: Longtermist Policy

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bi-partisan but there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

2
PeterSlattery
I really like this idea and think that having a global policy network could be valuable over the long term. Particularly if coordinated with other domains of EA work. For instance, I can imagine RT and various other researcher orgs and researchers providing evidence on demand to EAs who are directly embedded within policy production. 
1
JBPDavies
Hi Peter (if I may!), I love this and your other longtermism suggestions, thanks for submitting them! Not sure if you saw my suggestion below of a Longtermism Policy Lab - but maybe this is exactly the kind of activity that could fall under such an organisation/programme (within Rethink even)? Likewise for your suggestion of a lobbying group - by working directly with societal partners (e.g. national ministries across the world) you could begin implementation directly through experimentation.  I've been involved in a similar (successful) project called the 'Transformative Innovation Policy Consortium (TIPC)', which works with, for example, the Colombian government to shape innovation policy towards sustainable and just transformation (as opposed to systems optimisation).  Would love to talk to you about your ideas for this space if you're interested. I'm working with the Institutions for Longtermism research platform at Utrecht University & we're still trying to shape our focus, so there may be some scope for piloting ideas.
2
IanDavidMoss
JBPDavies, it sounds like you and I should connect as well -- I run the Effective Institutions Project and I'd love to learn more about your Institutions for Longtermism research and provide input/ideas as appropriate.
1
JBPDavies
Sounds fantastic - drop me an email at j.b.p.davies@uu.nl and I would love to set up a meeting. In the meantime I'll dive into EIP's work!
2
Peter Wildeford
Sure! Email me at peter@rethinkpriorities.org and I will set up a meeting.
1
brb243
If it shows that policies that safeguard the long-term objectives of the top lobbyists in the nation while disregarding others' preferences are the most popular, do you recommend them as attention-captivating conversation starters so that impartial consideration can be explained one-on-one to support its internalization by regulators by implementing measures to prevent the enactment of these, possible catastrophically risky (codified dystopia for some actors) popular policies, if I understand it correctly?

Experiments to scale mentorship and upskill people

Empowering Exceptional People, Effective Altruism

For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and there are few well-defined paths for young or early-career people to follow, so it can be hard to enter the field. Experts in the field are often ideal mentors - they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open up access to new opportunities - but there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The range of possible solutions is very broad - developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.

Our World in Base Rates

Epistemic Institutions

Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a very valuable public good.

So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example. 

e.g. 

“85% of big data projects fail”;
“10% of people refuse to be vaccinated because of fearing needles” (pre-COVID, so you can compare to the COVID hesitancy);
“11% of ballot initiatives pass”;
“7% of Emergent Ventures applications are granted”;
“50% of applicants get 80k advice”;
“x% of applicants get to the 3rd round of OpenPhil hiring”, “which takes y months”;
“x% of graduates from country [y] start a business”.

MVP:

  • come up with hundreds of baserates relevant to EA causes
  • scrape Wikidata for them, or diffbot.com
  • recurse: get people to forecast the true value, or later value (put them in a private competition on Foretold,  index them on metaforecast.org)


Later, Q... (read more)
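One small technical point: any scraped base rate is an estimate from a finite sample, so it may be worth publishing an uncertainty interval alongside each headline number. A minimal sketch using the Wilson score interval (the counts below are made up):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion -- a
    reasonable default when reporting a base rate estimated from n cases."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts: 11 of 100 tracked ballot initiatives passed.
lo, hi = wilson_interval(11, 100)  # roughly (0.06, 0.19)
```

The Wilson interval behaves better than the naive normal approximation for small samples and extreme proportions, which is exactly the regime many EA-relevant base rates would live in.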

I think this is neat. 

Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."

I of course see this as basically a bunch of estimation functions, but you get the idea.

Proportional prizes for prescient philanthropists

Effective Altruism, Economic Growth, Empowering Exceptional People

A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.

Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.

If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.
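The proportional-with-cutoff allocation described above can be sketched as follows (a toy model; the names, amounts, and minimum-payout threshold are all hypothetical):

```python
# Sketch of the proposed proportional prize split: the pool is divided in
# proportion to each supporter's contribution (donations, or advisor time
# valued at market rates), and payouts too small to justify the
# administrative overhead are dropped.

def allocate_prizes(contributions, prize_pool, min_payout):
    total = sum(contributions.values())
    payouts = {name: prize_pool * amt / total
               for name, amt in contributions.items()}
    return {name: p for name, p in payouts.items() if p >= min_payout}

prizes = allocate_prizes(
    {"donor_a": 50_000, "advisor_b": 10_000, "donor_c": 200},
    prize_pool=120_000, min_payout=1_000)
# donor_c's share (~399) falls under the overhead threshold and is dropped.
```

The `min_payout` parameter is the "capped only at a level where the prize money is close to the cost of the administrative overhead" clause made explicit.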

5
IanDavidMoss
Reading this again, I want to register that I am much more excited about the idea of rewarding donors for early investment than I am about the other elements of the plan. As someone who has founded multiple organizations, the task of attaching precise retrospective monetary values to different people's contributions of time, connections, talent, etc. in a way that will satisfy everyone as fair sounds pretty infeasible. Early donations, by contrast, are an objective and verifiable measure of value that is much easier to reward in practice. You could just say that the first, say $500k that the org raises is eligible for retroactive reward/matching/whatever, with maybe the first $100k or something weighted more heavily. It's also worth thinking through the incentives that a system like this would set up, especially at scale. It would result in more seed funding and more small charities being founded and sustained for the first couple of years. I personally think that's a good thing at the present time, but I also know people who argue that we should be taking better advantage of economies of scale in existing organizations. There is probably a point  at which there is too much entrepreneurship, and it's worth figuring out what that point is before investing heavily in this idea.
4
Dawn Drescher
Owen Cotton-Barratt and I have thought about this for a while and have mostly arrived at the solution that beneficiaries who collaborated on a project need to hash this out with each other. So make a contract, like in a for-profit startup, for who owns how much of the impact of the project. I think that capable charity entrepreneurs are a scarce resource as well, so we should try hard to foster them. So that’s probably where a large chunk of the impact is. When it comes to the incentive structures: We – mostly Matt Brooks and I, but the rest of the team will be around – will hold a talk on the risks from perverse incentives in our system at the Funding the Commons II conference tomorrow. Afterwards I can also link the video recording here. My big write-up, which is more comprehensive than the presentation but unfinished, is linked from the other proposal. That said … I don’t quite understand… More funding for donors -> more donors -> more money to charities -> higher scale, right? So this system would enable charities to hire more so people can specialize etc., not the opposite? Thanks!
3
colin
This is really interesting. Setting up individual projects as DAOs could be an effective way to manage this.  The DAO issues tokens to founders, advisors, and donors.  If retrospectively it turns out that this was a particularly impactful project the funder can buy and burn the DAO tokens, which will drive up the price, thereby rewarding all of the holders.
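The buy-and-burn mechanism described here can be illustrated with a toy constant-product pool (all numbers hypothetical; real DAO tokenomics would differ):

```python
# Toy constant-product pool (x * y = k) illustrating "buy and burn":
# a retro funder spends cash to buy project tokens out of the pool and
# destroys them, which raises the price for every remaining holder.

def buy_and_burn(cash_reserve, token_reserve, spend):
    k = cash_reserve * token_reserve
    new_cash = cash_reserve + spend
    new_tokens = k / new_cash            # tokens left in the pool
    burned = token_reserve - new_tokens  # bought out of the pool, then destroyed
    return new_cash, new_tokens, burned

price_before = 10_000 / 1_000                    # 10.0 cash per token
cash, tokens, burned = buy_and_burn(10_000, 1_000, spend=5_000)
price_after = cash / tokens                      # 22.5 cash per token
```

The key property is that the funder's retrospective purchase raises the marginal price, so early token holders (founders, advisors, donors) capture the appreciation without any per-person accounting.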
2
Dawn Drescher
Yep! There’s this other proposal for impact markets linked above. That’s basically that with slight tweaks. It’s all written in a technology-agnostic way, but one of the implementations that we’re currently looking into is on the blockchain. There’s even a bit of a prototype already. :-D
2
IanDavidMoss
I really like this idea, and FWIW find it much more intuitive to grasp than your impact markets proposal.
2
Dawn Drescher
Sweet, thanks! :-D Then it’ll also help me explain impact markets to people.

High quality, EA Audio Library (HEAAL)

all/meta, though I think the main value add is in AI

(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)

Audio has several advantages over text when it comes to consuming long-form content, with one significant example being that people can consume it while doing some other task (commuting, chores, exercising) meaning the time cost of consumption is almost 0. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:

I think a highly talented, dedicated generalist could become one of the world’s 25 most broadly knowledgeable people on the subject (in the sense of understanding a number of different agendas and arguments that are out there, rather than focusing on one particular line of research), from a standing start (no background in AI, AI alignment or computer science), within a year

What does high quality mean here, and what content might get covered?

  • High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical n

... (read more)
2
Nathan Young
Frankly, I'd like the ability to send a written feed to somewhere and have it turned into audio, maybe crowdfunded. Clearly non-linear can do it, so why can't I have it for, say, Bryan Caplan's writing.
3
alex lawsen
If you're ok with autogenerated content of roughly the quality of nonlinear, both Pocket and Evie are reasonable choices.

High-quality human performance is much more engaging than autogenerated audio, fwiw.

4
alex lawsen
Hence the original pitch!
2
Nathan Young
Non-Linear could be paid to repost the most upvoted posts but with voice actors. 

Teaching buy-out fund

Allocate EA Researchers from Teaching Activities to Research

Problem: Professors spend a lot of their time teaching instead of researching. Many don’t know that their universities offer “teaching buy-outs”, where if you pay a certain amount of money, you don’t have to teach. Many also don’t know that EA funders would be interested in paying for this.

Solution: Make a fund that exists explicitly for this purpose, so that more EAs know about the option. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.

Adversarial collaborations on important topics

Epistemic Institutions

There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, on which reasonable people hold very different views. We are interested in experimenting with various types of adversarial collaboration, which we define as people with opposing views working to clarify their disagreement and either resolve it or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified from them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.

Existing and past work relevant to this space include the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.

1
brb243
What topics? Which are not yet covered? (E. g. militaries already talk about peace) What adversaries? Are they rather collaborators (such as considering mergers and acquisitions and industry interest benefits for private actors and trade and alliance advantages for public actors)? Do you mean decisionmaker-nondecisionmaker collaborations - the issue is that systems are internalized, so you can get from the nondecisionmakers I want to be as powerful over others as the decisionmakers or also an inability to express or know their preferences (a chicken is in the cage so what can it say or a cricket is on the farm what do they know about their preferences) - probably, adversaries would prefer to talk about 'how can we get the other to give us profit' rather than 'how can we make impact' since the agreement is 'not impact, profit?'

Foundational research on the value of the long-term future

Research That Can Help Us Improve

If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?


To elaborate on what I have in mind with this proposal: it seems important to conduct research beyond reducing existential risk over the next century, since we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of these posts:

... (read more)
8
Fai
This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.

Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism

Epistemic Institutions, Values and Reflective Processes

Right now longtermism is being developed within a relatively narrow set of stakeholders and participants relative to the broad set of people (and nonhumans) that would be affected by the decisions we make. We'd like to see focus groups that engage a more diverse group of people (diverse across many axes, including but not limited to race, gender, age, geography, and socioeconomic status), explain longtermism to them, and explore what visions they have for the future of humanity (and nonhumans). Hopefully, through many iterations, we can find a way to bridge what is likely a rather large initial inferential distance and explore how a broader, more diverse group of people would think about longtermism once ideally informed. This can be related to and informed by work on deliberative democracy. It could also help initiate what longtermists call "the long reflection".

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

7
IanDavidMoss
I absolutely love this idea and really hope it gets funded! It reminds me in spirit of the stakeholder research that IDinsight did to help inform the moral weights GiveWell uses in its cost-effectiveness analysis. At scale, it could parallel aspects of the process used to come up with the Sustainable Development Goals.

Incubator for Independent Researchers

Training People to Work Independently on AI Safety

Problem: AI safety is bottlenecked by management and jobs. There are <10 orgs you can do AI safety full time at, and they are limited by the number of people they can manage and their research interests.

Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.

This could also be structured as a research organization instead of an incubator.

Expected value calculations in practice

Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.

Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.

We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.

EA Marketing Agency

Improve Marketing in EA Domains at Scale

Problem: EAs aren’t good at marketing, and marketing is important.

Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.

AGI Early Warning System
Anonymous Fire Alarm for Spotting Red Flags in AI Safety

Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.

Solution: Create an anonymous form, a "fire alarm" (like a whistleblowing Andon Cord of sorts), where these employees can report what they're seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.

Alignment Forum Writers

Pay Top Alignment Forum Contributors to Work Full Time on AI Safety

Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.

Solution: Offer them enough money to quit their job and work on AI safety full time.

(Per Nick's note, reposting)

Political fellowships

Values and Reflective Processes, Empowering Exceptional People

We'd like to fund ways to pull people who wouldn't otherwise run for political office into running. It's like a MacArthur grant: you get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get training, like the DCCC and NRCC provide, and when you run, two million is spent by a super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.

3
Jan-Willem
Great idea, at TFG we have similar thoughts and are currently researching if we should run it and the best way to run a program like this. Would love to get input from people on this.

The Billionaire Nice List

Philanthropy

A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, and people like checking lists. Let's create a list that can be sorted by different moral weights and incentivises billionaires to do more good.
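As a minimal sketch of the "sortable by different moral weights" idea, here is how the ranking could work. All donor names and impact numbers below are invented for illustration; nothing here reflects real estimates.

```python
# Hypothetical impact estimates per donor, broken down by cause area.
# Names and numbers are made up purely for illustration.
donors = {
    "Donor A": {"global_health": 9.0, "animal_welfare": 1.0, "x_risk": 0.5},
    "Donor B": {"global_health": 2.0, "animal_welfare": 6.0, "x_risk": 3.0},
}

def rank(donors, weights):
    """Sort donors by total impact under a reader-chosen set of moral weights."""
    def score(impacts):
        return sum(weights.get(cause, 0.0) * v for cause, v in impacts.items())
    return sorted(donors, key=lambda name: -score(donors[name]))

# Different moral weights produce different orderings of the same list.
print(rank(donors, {"global_health": 1.0}))  # Donor A first
print(rank(donors, {"x_risk": 1.0}))         # Donor B first
```

The point of the sortable design is that the list never has to commit to one moral framework: each reader applies their own weights, while the underlying impact estimates stay fixed.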

9
PeterSlattery
I really like this. I had a similar idea focused on trying to change the incentive landscape for billionaires to make it as high status as possible to be as high impact as possible. I think that lists and awards could be a good start. Would be especially good to have the involvement of some aligned ultrawealthy people who might have a good understanding of what will be effective.
3
Nathan Young
Yeah, I would love for those of us who know billionaires, or are billionaires, to give a sense of what motivates them.

Pro-immigration advocacy outside the United States

Economic Growth

Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress in the U.S. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing or keeping immigration levels about the same. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for immigration there.

What this project could look like:

  1. Identify 2-5 developed countries where pro-immigration advocacy seems especially promising.
  2. Build partnerships with people and orgs in these countries with expertise in pro-immigration advocacy.
  3. Identify the most promising opportunities to increase immigration to these countries and act on them.

Related posts:

... (read more)
5
Greg_Colbourn
Japan is coming from a very low base - 2% of the population is foreign-born - vs. 15% in the US. A lot of room for more immigrants before "saturation" is reached, I guess. Although I imagine that xenophobia and racism are anti-correlated with immigration, at least at low levels [citation needed].
1
brb243
Top countries by refugees per capita The world's most neglected displacement crises Should these countries be supported in their efforts (I read I think $0.1/person/day for food) and the crises prevented such as by supporting the source area parties to make and abide by legal agreements over resources, prevent drug trade by higher-yield farming practices and education or urban career growth prospects, improve curricula to add skills development in care for others (teaching preventive healthcare and others' preferences-based interactions), etc - as a possibly cost-effective alternative to pro-immigration advocacy - then, either privileged persons will be able to escape the poor situation, which will not be solved or unskilled persons with poor norms will be present at places which may not improve their subjective wellbeing, which is given by the norms' internalization?
2
Eevee🔹
Your question is very long and hard to understand. Can you please reword it in plain English?
1
brb243
Displacement crises are large and neglected. For example, for one of the top 10 crises, 6,000 additional persons are displaced per day. Displaced persons can be supported by very low amounts, which make large differences. For example, $0.1/day for food and low amount for healthcare. In some cases, this would have otherwise not been provided. So, supporting persons in crises in emerging economies, without solving the issues, can be cost-effective compared to  spending comparable effort on immigration reform. Second, supporting countries that already host refugees of neglected crises to better accommodate these persons (so that they do not need to stay in refugee camps reliant on food aid and healthcare aid), for example, by special economic zones, if these allow for savings accumulation, and education, so that refugees can better integrate and the public welcomes it due to economic benefits, can be also competitive in cost-effectiveness compared to immigration reform in countries with high public attention and political controversy and much smaller refugee populations, such as the US. The intervention is more affordable, makes larger difference for the intended beneficiaries, has higher chance of political support, and can be institutionalized while solving the problem. Third, allocating comparable skills to neglected crises rather than to immigration reform in industrialized nations where unit decisionmaker's attention can be much more costly, such as the US, can resolve the causes of these crises, which can include limited ability to draft and enforce legal agreements around natural resources or mitigate violence related to limited alternative prospects of drug farmers by sharing economic alternatives, such as higher-yield commodity farming practices, agricultural value addition skills, or upskilling systems related to work in urban areas. So, the cost-effectiveness of solving neglected crises by legal, political, and humanitarian assistance can be much higher th

Improving ventilation

Biorisk

Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.

We'd like to see a project that conducts rigorous research to establish strong ventilation strategies in a variety of contexts and evaluates their effectiveness against various ventilation issues. Once successful ventilation strategies are developed, and assuming it would be cost-effective to do so, this project could then aim to roll out ventilation and campaign/market for ventilation interventions as a for-profit, non-profit, or hybrid.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Advocacy organization for unduly unpopular technologies

Public opinion on key technologies

Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.

4
Jackson Wagner
Probably want to avoid unifying all of these under one "we advocate for things that most people hate" advocacy group!  Although that would be pretty hilarious.  But funding lots of little different groups in some of these key areas is great, such as trying to make it easier to build clean energy projects of all kinds as I mention here.
4
simonfriederich
Right, it sounds absurd and maybe hilarious, but it's actually what I had in mind. The advantage is internal coherence. The idea is basically to let "ecomodernism" go mainstream, having a Greenpeace-like org that has ideas more similar to the Breakthrough Institute.  It's far from clear that this can work, but it's worth a try, in my view. About your suggestion: I love it and voted for it. 
2
Jackson Wagner
Maybe so... like an economics version of the ACLU that builds a reputation of sticking up for things that are good even though they're unpopular. Might work especially well if oriented around the legal system (where ACLU operates and where groups like Greenpeace and the ever-controversial NRA have had lots of success), rather than purely advocacy? Having a unified brand might help convince people that our side has a point. For instance, a group that litigates to fight against nimbyism by complaining about the overuse of environmental laws or zoning regulations... the nimbys would naturally see themselves as the heroes of the story and assume that lawyers on the pro-construction side were probably villains funded by big greedy developers. Seeing that their opposition was a semi-respected ACLU-like brand that fought for a variety of causes might help change people's minds on an issue. (On the other hand, I feel like the legal system is fundamentally friendlier terrain for stopping projects than encouraging them, so the legal angle might not work well for GMOs and power plants. But maybe there are areas like trying to ban Gain-of-Function research where this could be a helpful strategy.) We'd still probably want the brand of this group to be pretty far disconnected from EA -- groups like Greenpeace, the NRA, etc naturally attract a lot of controversy and demonization.
2
Andreas F
Since lifecycle analyses show that it is most likely the best option, I fully agree on the nuclear part. I also agree on the GMO part, since large meta-analyses show no adverse effects on the environment (compared as yield/area, biodiversity/dollar, yield/dollar, and labor/yield) in comparison with other agriculture. I have no assessment of cellular agriculture, but I do think it is fair to support such schemes, at least until we have solid data and can decide again.
2
Peter S. Park
Note: I wanted to share an example. While nuclear fission reactors are unpopular and this unpopularity is sticky, it is possible that efforts to preemptively decouple the reputation of nuclear fusion reactors from that of nuclear fission reactors can succeed (and that nuclear fusion's hypothetical positive reputation can be sticky over time). But it is also possible that the unpopularity of nuclear fission will stick to nuclear fusion. Which of these two possibilities occurs, and how proactive action can change this, is mysterious at the moment, because our causal/theoretical understanding of the science of human behavior is incomplete (see my submission, "Causal microfoundations for behavioral science"). Preemptive action regarding historically unprecedented settings like emergent technologies, for which much of the relevant data may not yet exist, can be substantially informed by externally valid predictions of people's situation-specific behavior in such settings.
3
simonfriederich
Interesting thought. FWIW, I think it's more realistic that we can turn around public opinion on fission first, reap more of the benefits of fission, and then have a better public landscape for fusion, than that we accept the unpopularity of fission as a given but somehow end up with popular fusion. But I may well be wrong.

Building the grantmaker pipeline

Empowering Exceptional People, Effective Altruism

The amount of funding committed to effective altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, which requires finding and training new grantmakers on best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, and more.

NB: This might be a refinement of fellowships, but I think it's particularly important.

7
Jackson Wagner
This is such a good idea that I think FTX is already piloting a regranting scheme as a major prong of their Future Fund program! But it would be cool to build up the pipeline in other more general/systematic ways -- maybe with mentorship/fellowships, maybe with more experimental donation designs like donor lotteries and impact certificates, maybe with software that helps people to make EA-style impact estimates.
4
Cillian_
It seems that FTX's Regranting Program could be a great way to scalably distribute funds & build the grantmaker pipeline. We (Training for Good) are also developing a grantmaker training programme like what James has described here to help build up EA's grantmaking capacity (which could complement FTX's Regranting Program nicely). It will likely be an 8 week, part-time programme, with a small pot of "regranting" money for each participant and we're pretty excited to launch this in the next few months. In the meantime, we're looking for 5-10 people to beta test a scaled-down version of this programme (starting at the end of March). The time commitment for this beta test would be ~5 hours per week (~2 hrs reading, ~2 hrs projects, ~1 hr group discussion). If anyone reading this is interested, feel free to shoot me an email cillian@trainingforgood.com 

Top ML researchers to AI safety researchers

Pay top ML researchers to switch to AI safety

Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.

Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.

EA Productivity Fund

Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.

Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can't afford them or would be put off by the cost (because of imposter syndrome, or just because it feels selfish).

Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.

Automated Open Project Ideas Board

 The Future Fund

All of these ideas should be submitted to a board where anyone can forecast their value (e.g., in lives saved per dollar) as it would be rated by a trusted research organisation, say Rethink Priorities. The forecasts can be reputation-based or run as prediction markets. That research organisation then checks 1% of the ideas and scores them, and those scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Funders can then donate to them as they see fit.
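To make the weighting mechanism concrete, here is a minimal sketch under one possible design: each forecaster's weight comes from their accuracy on the small audited subset, and ideas are then ranked by the weighted average of all forecasts. All forecaster names, idea names, and numbers are hypothetical, and inverse mean absolute error is just one illustrative calibration rule.

```python
def forecaster_weights(forecasts, audited_scores):
    """Weight each forecaster by inverse mean absolute error on the audited ~1%."""
    weights = {}
    for name, preds in forecasts.items():
        errors = [abs(preds[i] - true) for i, true in audited_scores.items() if i in preds]
        mae = sum(errors) / len(errors) if errors else 0.0
        weights[name] = 1.0 / (1.0 + mae)  # a perfect forecaster gets weight 1.0
    return weights

def rank_ideas(forecasts, weights):
    """Weighted-average forecast per idea, best first."""
    ideas = {i for preds in forecasts.values() for i in preds}
    scored = {}
    for i in ideas:
        num = sum(w * preds[i] for (n, preds), w in
                  ((item, weights[item[0]]) for item in forecasts.items()) if i in preds)
        den = sum(weights[n] for n, preds in forecasts.items() if i in preds)
        scored[i] = num / den
    return sorted(scored.items(), key=lambda kv: -kv[1])

# Hypothetical forecasts of value-per-dollar for two ideas:
forecasts = {
    "alice": {"idea_1": 10.0, "idea_2": 3.0},
    "bob":   {"idea_1": 6.0,  "idea_2": 9.0},
}
audited = {"idea_1": 9.0}  # the research org only scores a sample
w = forecaster_weights(forecasts, audited)  # alice was closer, so she counts more
print(rank_ideas(forecasts, w))
```

The design choice worth noting is that the spot-checked scores calibrate the whole crowd: the research organisation never has to evaluate more than a sample, yet accurate forecasters gradually dominate the rankings.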


Massive US-China exchange programme

Great power conflict, AI

Fund (university) students to live with a host family in the other country: US-China, Russia-US, China-India, potentially India-Pakistan. This is important if one thinks that personal experience makes individuals less likely to incentivise or encourage escalation, war, and certain competitive dynamics.

8
Jackson Wagner
This might have a hard time meeting the same effectiveness bar as #13, "Talent Search" and #17, "Advocacy for US High-Skill Immigration", which might end up having some similar effects but seem like more leveraged interventions.
2
IanDavidMoss
I disagree, as this idea seems much more explicitly targeted at reducing the potential for great power conflict, and I haven't yet seen many other tractable ideas in that domain.
5
Alex D 🔸
My understanding is the Erasmus Programme was explicitly started in part to reduce the chance of conflict between European states.

Nuclear/Great Power Conflict Movement Building

Effective Altruism

Given the current situation in Ukraine, movement building related to nuclear x-risk or great power conflict would likely be much more tractable than it was until recently. We don't know how long this period will last, and the public's memory can be short, so we should take advantage of this opportunity. This outreach should focus on people with an interest in policy, or potential student group organisers, as these people are most likely to have influence here.

(Per Nick's note, reposting)

Market shaping and advanced market commitments

Epistemic institutions; Economic Growth

Market shaping is when an idea can only be jump-started by committed demand or other forces. Operation Warp Speed is the most recent example of market-shaping through advanced market commitments, but it has been used several times for other vaccine development. We are interested in funding work to understand when market-shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping or advanced market commitments in our areas of interest.

(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)

Crowding in other funding

We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive and some others are only ever likely to attract (or want) longtermist funding. But, we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:

  • Offering financial incentives (e.g. advanced market commitments)
  • Highlighting financial potential in major projects we would like to see (e.g. especially projects of the scale of the Grok / Brookfield bid for AGL)
  • Portfolio structures / financial engineering (e.g. Bridge Bio)
  • Appealing to social preferences (e.g. highlight points of 'common sense' overlap between longtermist views and ESG)
1
colin
I'll add that advanced market commitments are also useful in situations where a jump-start isn't explicitly required. In that case, they can act similarly to prize-based funding.

An Organisation that Sells its Impact for Profit

Empowering Exceptional People, Epistemic Institutions

Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) selling impact certificates, (2) prizes, and/or (3) grants (but only if the grantmaker values the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis: funding would be raised from for-profit investors, and staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if funding mechanisms (1)-(3) become sufficiently popular, but given the increased funding in longtermist circles, that now looks increasingly likely.

See also Retrospective grant evaluations,  Retroactive public goods funding, Impact ... (read more)

Rationalism But For Group Psychology

Epistemic Institutions

LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.

We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and help groups achieve more impact, like how the original rationalist movement did so with individuals. We imagine this would involve identifying useful ideas from group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as trying to intentionally experiment. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to help align employees with organizational goals, better ways to keep track of projects, etc.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

9
Gavin
The Epistea Summer Experiment was a glorious example of this.  

Wild animal suffering in space

Space governance, moral circle expansion.

 

Terraforming other planets might cause animals to come to exist on those planets, whether through intentional or unintentional actions. These animals might live net negative lives.

Also, we cannot rule out the possibility that there are already wild "animals" (or any form of sentient beings) living net negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)

Relevant research includes:

  • Whether wild animals lead net negative or positive lives on Earth, and under what conditions, and whether the same holds on other planets.
  • Tracking, or even research on using AI and robotics to monitor and intervene in habitats. This might be critical if there are planets that have wild "animals" but are uninhabitable for humans, who therefore cannot stay close to monitor (or intervene in) the welfare of these animals.
  • Communication strategies related to wild animal welfare, as it seems to cause controversy, if not outrage.
  • Philosophical research, including population ethics, environmental ethics, comparing welfare/suffering between species, moral uncertainty, and suffering-focused vs. non-suffering-focused ethics.
  • General philosophical work on the ethics of space governance in relation to nonhuman animals.
6
Dawn Drescher
Another great concern of mine is that even if biological humans are completely replaced with ems or de novo artificial intelligence, these processes will probably run on great server farms that likely produce heat and need cooling. That results in a temperature gradient that might make it possible for small sentient beings, such as invertebrates, to live there. Their conditions may be bad, they may be r-strategists and suffer in great proportions, and they may also be numerous if these AI server farms spread throughout the whole light cone of the future. My intuition is that very few people (maybe Simon Eckerström Liedholm?) have thought about this so far, so maybe there are easy interventions to make that less likely to happen.
5
Dawn Drescher
Brian Tomasik and Michael Dello-Iacovo have related articles.
3
DC
Here's a related question I asked.

AI alignment prize suggestion: Introduce AI Safety concepts into the ML community

Artificial Intelligence

Recently, several papers published at top ML conferences have introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, and sometimes include initial suggestions for remedies. They are useful in several ways: they popularise AI alignment concepts, pave the way for further research, and demonstrate that researchers can do alignment research while also publishing in top venues. A great example is Optimal Policies Tend To Seek Power, published at NeurIPS. The Future Fund could advertise prizes for any paper that gets published in a top ML/NLP/computer vision conference (for ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.

2
Yonatan Cale
Risk: The course presents possible solutions to these risks, and the students feel like they "understood" AI risk; in the future it will be harder to reach these students about AI risk, since they feel like they already have an understanding, even though it is wrong. I am specifically worried about this because I try imagining who would write the course and who would teach it. Would these people be able to point out the problems in current approaches to alignment? Would they be able to "hold an argument" in class well enough to point out holes in the solutions that the students will suggest after thinking about the problem for five minutes? I'm not saying this isn't solvable, just that it's a risk.

EA Macrostrategy

Effective Altruism

Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.

Evaluating large foundations

Effective Altruism

GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming they care about impact or about public opinion of their operations, and thus that our analysis could actually have some effect on them).

For instance, we've seen claims that the Global Fund, which spends $4B per year, meets a 2x GiveDirectly bar but not a GiveWell Top Charity bar.

This matters because most charity - and even most good charity - is still not done by EAs or run on EA lines. Also, even big cautious foundations can risk waste or harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate of conservative giving failing, so that we can compare hits-based giving against it. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.

Refining EA communications and messaging

Values and Reflective Processes, Research That Can Help Us Improve

If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.

1
Jack Lewars
I think this exists (but could be much bigger and should still be funded by this fund).

TL;DR: EA Retroactive Public Goods Funding

In your format:

Deciding which projects to fund is hard, and one of the reasons for that is that it's hard to guess which projects will succeed and which will fail. But wait, startups have solved this problem perfectly: Anybody is allowed to vet a startup and decide to invest (bet) their money on this startup succeeding, and if the startup does succeed, then the early investors get a big financial return.

The EA community could do the same, only it is missing the part where we give big financial returns to projects that turned out good.

This would make the fund's job much easier: they would have to vet which projects helped IN RETROSPECT, which is much easier, and they'd leave the hard prediction work to the market.
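A toy sketch of the payout mechanics being described (the pro-rata split and all dollar figures are illustrative assumptions, not part of the proposal):

```python
# Toy model of retroactive public goods funding: early investors buy
# "impact stakes" in a project up front; if a retro funder later judges
# the project valuable, the award is split pro rata among those investors.

def investor_payouts(stakes, retro_payout):
    """Split a retroactive award pro rata among early investors.

    stakes: dict mapping investor name -> amount invested up front
    retro_payout: total amount the retro funder pays for realized impact
    """
    total = sum(stakes.values())
    return {name: retro_payout * amount / total for name, amount in stakes.items()}

# Hypothetical example: two investors fund a $10k project; the retro fund
# later values its impact at $40k, a 4x return for both.
payouts = investor_payouts({"alice": 7_000, "bob": 3_000}, retro_payout=40_000)
```

The point of the sketch is only that retroactive vetting plus tradable stakes reproduces the startup-style incentive the comment describes: early bets on projects that turn out well earn a return.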

Context for proposing this

I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich and that would be somewhat broken to do. But I AM rich enough to fund this project and bet on it working well enough to get a Retroactive Public Good grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.

In Vitalik's words:

https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c

2
Ben Dean
Related: Impact Certificates

EA Forum Writers

Pay top EA Forum contributors to write about EA topics full time

Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.

Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.

5
Nathan Young
Pay users based on post karma (but not comment or question karma, which are really easy to get in comparison).
3
Yitz
Could lead to a disincentive to post more controversial ideas there, though.
2
Chris Leong
Goodhart's law
2
Nathan Young
I don't think we'd be wedded to a single metric. Also, isn't karma already vulnerable to Goodhart's law? I think we should already be concerned with this.
2
Nathan Young
I don't think we'd be wedded to this metric

A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire

Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes

Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website. There are additional proposals also on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really we’re quite clueless about what the future will bring.

This week alone I’ve discussed with friends and acquaintances three decisions in completely different contexts that might make the difference between paradise and hell for all sentient life - not just in the abstract, in the way that cluelessness forces us to assign some probability to almost any outcome, but in the sense where we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions on almost a daily basis, and on hardly any more information.

As a result, the robustness of an intervention has been the key criterion for prioritiza... (read more)

1
marswalker
I had a similar idea, and I think that a few more things need to be included in the discussion of this. There are multiple levels of ideas in EA, and I think that a red team becomes much more valuable when it engages with issues that are applicable to the whole of EA. Ideas like the institutional critique of EA, the other heavy tail, and others are often not read and internalized by EAs. It is worth having a team that makes arguments like this, then breaks them down and provides methods for avoiding the pitfalls pointed out in them. Things brought up in critiques of EA should be specifically recognized and talked about as good: held up to be examined, then passed out to our community so that we can grow and overcome the objections. I'm almost always lurking on the forum, and I don't often see posts talking about EA critiques. That should change.
2
Dawn Drescher
I basically agree but in this proposal I was really referring to such things as “Professor X is using probabilistic programming to model regularities in human moral preferences. How can that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?” I also think that the category that you’re referring to is very valuable but I think those are “simply” contributions to priorities research as they are published by the Global Priorities Institute (e.g., working papers by Greaves and Tarsney come to mind). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles that I would class that way. I think priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well-known. Hence why my proposal is not meant to be about that.

Subsidise catastrophic risk-related markets on prediction markets

Prediction markets and catastrophic risk

Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.

*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so might show reluctance here too.

FTX, which itself runs prediction markets, might be particularly well suited for prediction-market interventions like this. I think they could do a lot to advance people's understanding of prediction markets if, in addition to their presidential prediction market, they also offered a conditional prediction market of how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans winning vs. the Democrats winning. Conditional prediction markets on important indicators around big national elections would provide directly useful information in addition to educating people about prediction markets' potential.

1
Alex D 🔸
My company seeks to predict or rapidly recognize health security catastrophes, and also requires an influx of capital when such an event occurs (since we wind up with loads of new consulting opportunities to help respond). Is there currently any way for us to incentivize thick markets on topics that are correlated with our business? The idea of getting the information plus the hedge is super appealing!

Pandemic preparedness in LMICs

Biorisk

COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced they are. While there are certainly many problems with pandemic preparedness in high-income countries that need to be addressed, LMICs face even more issues in helping detect, identify, contain, mitigate, and/or prevent currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMICs, opening up the risk of further, more virulent mutations.

We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local context. This project would be especially focused on novel pathogens that are more severe than currently known pathogens - and would help provide the resources and knowledge needed to upgrade their plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise working with LMICs. An emergency fund and expert advice could also be provisioned to be ready to go when pathogens are... (read more)

Language models for detecting bad scholarship 

Epistemic institutions

Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 features we need to protect ourselves.

This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo). 

Take some claim P which is below the threshold of obviousness that warrants a citation. 

It seems relatively easy, given current tech, to answer: (1) "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - but under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.

It is very hard to answer (2) "Is the cited ar... (read more)
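Question (1) maps naturally onto textual entailment: treat each sentence of the cited passage as a premise and claim P as the hypothesis, then flag citations whose best-supporting sentence scores below a threshold. The sketch below stubs the scorer with word overlap so it runs standalone; a real system would swap in a trained NLI model, and the threshold is an illustrative assumption:

```python
def support_score(sentence, claim):
    """Crude stand-in for an NLI entailment score: Jaccard word overlap.
    A real pipeline would replace this with a trained entailment model."""
    a, b = set(sentence.lower().split()), set(claim.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def check_citation(cited_text, claim, threshold=0.5):
    """Return the best-supporting sentence in the cited text and whether
    its support score clears the threshold."""
    sentences = [s.strip() for s in cited_text.split(".") if s.strip()]
    best = max(sentences, key=lambda s: support_score(s, claim), default="")
    return best, support_score(best, claim) >= threshold

best, ok = check_citation(
    "The drug reduced mortality by 20 percent. Side effects were mild.",
    "the drug reduced mortality by 20 percent",
)
```

Even this weak surface-level check would surface the grossest non sequiturs; the interesting research problem is pushing the scorer from lexical overlap toward genuine entailment.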

Getting former hiring managers from quant firms to help with alignment hiring

Artificial Intelligence, Empowering Exceptional People

Despite having lots of funding, alignment seems not to have been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the buy-in to alignment required before one can contribute is often cited as a reason, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end in itself. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.

On malevolence: How exactly does power corrupt?

Artificial Intelligence / Values and Reflective Processes

How does it happen, if it happens? Some plausible stories:

  • Backwards causation: people who are "corrupted" by power always had a lust for power but deluded others, and maybe even themselves, about their integrity.
     
  • Being a good ruler (of any sort) is hard and at times very unpleasant; even the nicest people will try to cover up their faults, covering up causes more problems... and at some point it is very hard to admit that you were an incompetent ruler all along.
     
  • Power changes your incentives so much that it corrupts all but the strongest. The difference from the last one is that value drift is almost immediate upon getting power.
     
  • A mix of the last two would be: you get more and more adverse incentives with every rise in power.
     
  • It might also be the case that most idealistic people come into power under very stressful circumstances, which forces them to make decisions favouring consolidation of power (kind of like instrumental convergence).
     
  • See also this on the personalities of US presidents and their darknesses.
     
2
MaxRa
Yes, that's interesting and plausibly very useful to understand better. Might also affect some EAs at some point. The hedonic treadmill might be part of it: you get used to the personal perks quickly, so you still feel motivated & justified to put ~90% of your energy into problems that affect you personally -> removing threats to your rule, marginal status-improvements, getting along with people close to you. And some discussion of the backwards causation idea is here in an oldie from Yudkowsky: Why Does Power Corrupt?

Bounty Budgets

Like Regranting, but for Bounties

Problem: In the same way that regranting decentralizes grantmaking, the same could be done for bounties. For example, give the top 20 AI safety researchers up to $100,000 each to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.

In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.

Bounties are great because you only pay out if it's successful. If hypothetically each researcher created 5 bounties at $10,000 each that’d be 100 bounties - lots of experiments.

RFPs are great because they put less risk on the applicants while also being a scalable, low-management way to turn money into impact.

Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo

More public EA charity evaluators

Effective Altruism

There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge and Animal Charity Evaluators. We would like to see more professional funding opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas that are currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren’t yet covered such as AI and EA meta.

Longtermist risk screening and certification of institutions

Artificial Intelligence, Biorisk and Recovery from Catastrophe

Companies, nonprofits and government institutions participate and invest in activities that might significantly increase global catastrophic risk like gain-of-function research or research that might increase the likelihood of unaligned AGI. We’d like to see an organisation that evaluates and proposes policies and practices that should be followed in order to reduce these risks. Institutions that commit to following these practices and submit themselves to independent audits could be certified. This could help investors and funders to screen institutions for potential risks. It could also be used in future corporate campaigns to move companies and investors into adopting responsible practices.

2
Nathan Young
How would this be effective, rather than just creating additional work for grantmakers and increasing the entry barriers for grantees? It seems that many similar schemes for other kinds of risk end up as meaningless box-ticking enterprises, which would lead to less effectiveness and possibly reputational harm to EA. This is my prior when I hear a new audit proposed, though I hope it won't apply in your case.
1
Patrick Gruban 🔸
I agree that there is a risk that this leads to additional burden without meaningful impact. Seeing the number of certifications currently deployed that are used both in public-facing marketing and to reduce supply-chain risks (see for example this certifier), I would put the chance that longtermist causes like biosecurity risks will be incorporated into existing standards or launched as new standards within the next 10 years at 70%. We could preempt this by building one or more standards based on actual expected impact instead of box-ticking. If this bet works out, then we might make a counterfactual impact; however, I would also like to see the organisation shut down after doing its research if it doesn't see a path to a certification having impact.

Resilient ways to archive valuable technical / cultural / ecological information
Biorisk and recovery from catastrophe

 In ancient Sumeria, clay tablets recording ordinary market transactions were considered disposable.  But today's much larger and wealthier civilization considers them priceless for the historical insight they offer.  By the same logic, if human civilization millennia from now becomes a flourishing utopia, they'll probably wish that modern-day civilization had done a better job at resiliently preserving valuable information.  For example, over the past 120 years, around 1 vertebrate species has gone extinct each year, meaning we permanently lose the unique genetic info that arose in that species through millions of years of evolution.
There are many existing projects in this space - like the Internet Archive, museums storing cultural artifacts, and efforts to protect endangered species. But almost none of these projects are designed robustly enough to last many centuries with the long-term future in mind. Museums can burn down, modern digital storage technologies like CDs and flash memory aren't designed to last for centuries, and many... (read more)

2
Dawn Drescher
Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition: Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge, or even the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe. The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc. I think that is also something that none of the existing projects take into account.

AI Safety “school” / More AI safety Courses

Train People in AI Safety at Scale

Problem: Part of the talent bottleneck is caused by there not being enough people who have the relevant skills and knowledge to do AI safety work. Right now, there’s no clear way to gain those skills. There’s the AGI Fundamentals curriculum, which has been a great success, but aside from that, there’s just a handful of reading lists. This ambiguity and lack of structure lead to way fewer people getting into the field than otherwise would.

Solution: Create an AI safety “school” or a bunch more AI safety courses. Make it so that if you finish the AGI Fundamentals course there are a lot more courses where you can dive deeper into various topics (e.g. an interpretability course, values learning course, an agent foundations course, etc). Make it so there’s a clear curriculum to build up your technical skills (probably just finding the best existing courses, putting them in the right order, and adding some accountability systems). This could be funded course by course, or funded as a school, which would probably lead to more and better quality content in the long run.

Offer paid sabbatical to people considering changing careers

Empowering Exceptional People

People are sometimes locked into their non-EA careers because, while working, they do not have time to:

  • Prioritize what altruistic job would fit them best
  • Learn what they need for this job

Create an organization that will offer paid sabbaticals to people considering changing careers to more EA-aligned jobs to help this transition. During the sabbatical, they could be members of a community of people in a similar situation, with coaching available.

Agree. I think that having an Advance Market Commitment system for this makes sense. E.g., FTX says 'We will fund mid-career academics/professionals for up to x months to do y.' My experience is that most of the high-value people I know who are good professionals are sufficiently time-poor and dissuaded by uncertainty that they won't spend 2-5 hours to apply for something they don't know they will get. The barriers and costs are probably greater than most EA funders realise.

An alternative/related idea is to have a simple EOI system where people can submit a fleshed-out CV and a paragraph and then get an AMC on an application - e.g., 'We think that there is a more than 60% chance that we would fund this and would therefore welcome a full application.'

A public EA impact investing evaluator

Effective Altruism, Empowering Exceptional People

Charity evaluators that publicly share their research - such as GiveWell, Founders Pledge and Animal Charity Evaluators - have arguably not only helped move a lot of money to effective funding opportunities but also introduced many people to the principles of effective altruism, which they have applied in their lives in various ways. Apart from some relatively small projects (1) (2) (3) there is currently no public EA research presence in the growing impact investing sector, which is both large in the amount of money being invested and in its potential to draw more exceptional people’s attention to the effective altruism movement. We’d love to see an organization that takes GiveWell-quality funding opportunity research to the impact investing space and publicly shares its findings.

2
Brendon_Wong
Seeing this late, but this is a wonderful idea! Will Roderick and I worked on "GiveWell for Impact Investing" a while ago and published this research on the EA Forum. We ultimately pursued other professional priorities, but we continue to think the space is very promising, stay involved, and may reenter it in the future.

Predicting Our Future Grants

Epistemic Institutions, Research That Can Help Us Improve

If we had access to a crystal ball that told us exactly what our grants five years from now would otherwise be, we could make substantially better decisions now. Just making the grants we'd otherwise have made five years in the future could save a lot of grantmaking time and money, as well as cause many amazing projects to happen more quickly.

We don't have a crystal ball that lets us see future grants. But perhaps high-quality forecasts can be the next best thing. Thus, we're extremely excited about people experimenting with prediction-evaluation setups to predict the Future Fund's future grants with high accuracy, helping us to potentially allocate better grants more quickly.

Participatory longtermism

Values and reflective processes, Effective Altruism

Most longtermist and EA ideas come from a small group of people with similar backgrounds, but could affect the global population now and in the future. This creates the risk of longtermist decisionmakers not being aligned with that wider population. Participatory methods aim to involve people in decisionmaking about issues that affect them, and they have become common in fields such as international development, global health, and humanitarian aid. Although a lot could be learned from existing participatory methods, they would need to be adapted to issues of concern to EAs and longtermists. The fund could support the development of new participatory methods that fit with EA and longtermist concerns, and could fund the running of participatory processes on key issues.

Additional notes:

  • There is a field called participatory futures; however, it seems not very rigorous [based on a very rough impression - see the comment below about this], and as far as I know it hasn't been applied to EA issues.
  • Participedia has writeups of participatory methods and case studies from a variety of fields.
6
Gavin
This comments section is pretty participatory.
3
MaxRa
Cool idea! :) You might be interested in skimming the report Deliberation May Improve Decision-Making from Rethink Priorities. > In this essay from Rethink Priorities, we discuss the opportunities that deliberative reforms offer for improving institutional decision-making. We begin by describing deliberation and its links to democratic theory, and then sketch out examples of deliberative designs. Following this, we explore the evidence that deliberation can engender fact-based reasoning, opinion change, and under certain conditions can motivate longterm thinking. So far, most deliberative initiatives have not been invested with a direct role in the decision-making process and so the majority of policy effects we see are indirect. Providing deliberative bodies with a binding and direct role in decision-making could improve this state of affairs. We end by highlighting some limitations and areas of uncertainty before noting who is already working in this area and avenues for further research.
3
JBPDavies
Love the idea - just writing to add that Futures Studies, participatory futures in particular, and future scenario methodologies could be really useful for Longtermist research. Methods in these fields can be highly rigorous (I've been working with some futures experts as part of a project to design 3 visions of the future - which have just finished going through a lengthy stress-testing and crowd-sourcing process to open them up to public reflection and input), especially if the scenario design is approached in a systematised way using a well-developed framework. I could imagine various projects that aim to create a variety of different desirable visions of the future through participatory methods, identifying core characteristics, pathways towards them, system dynamics and so on to illustrate the value and importance of longtermist governance to get there. Just one idea, but there are plenty of ways to apply this field to EA/Longtermism! Would love to talk about your idea more as it also chimes with a paper I'm drafting, 'Contesting Longtermism', looking at some of the core tensions within the concept and how these could be opened up to wider input. If you're interested in talking about it, feel free to reach out to me at j.b.p.davies@uu.nl
1
agnode
Thanks for the point about rigor - I'm not that familiar with participatory futures but had encountered it through an organisation that tends to be a bit hypey. But good to know there is rigorous work in that field.  I agree that there are lots of opportunities to apply to EA/Longtermism and your paper sounds interesting. I'll send an email. 

Research on the long-run determinants of civilizational progress
Economic growth

What factors were the root cause of the industrial revolution?  Why did industrialization happen in the time and place and ways that it did?  How have the key factors supporting economic growth changed over the last two centuries?  Why do some developing countries manage to "catch up" to the first world, while others lag behind or get stuck in a "middle-income trap"?  Is the pace of entrepreneurship or scientific innovation slowing down -- and if so, what can we do about it?  Is increasing amounts of "vetocracy" an inevitable disease that afflicts all stable and prosperous societies (as Holden Karnofsky argues here), or can we hope to change our culture or institutions to restore dynamism?  At FTX, we'd be interested to fund research into these "progress studies" questions.  We're also interested in funding advocacy groups promoting potential policy reforms derived from the ideas of the progress studies movement.

2
Jackson Wagner
See also many of Zac Townsend's ideas, the idea of nuclear power & GMO advocacy, and my list of object-level planks in the progress-studies platform.

Pay prestigious universities to host free EA-related courses to very large numbers of government officials from around the world

Empowering Exceptional People

The direct benefit of the courses would be to give government officials better tools for thinking and talking with each other.

 The indirect benefit could be to allow large numbers of pre-disposed officials to be seen by <some organisation> who could use the opportunity to identify those with particular potential and offer them extra support or opportunities so they can make an even bigger impact.

The need for it to be free is to overcome the blocker of otherwise needing to write a business case for attendance which may then require some sort of tortuous approval process.

The need for it to be hosted at a prestigious university is to overcome the blocker of justifying to bosses or colleagues why the course is worthwhile by allowing piggybacking off the University's brand.

High-quality human data

Artificial Intelligence

Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects. 

We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop that’s optimized for researchers’ workflow. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.

Some alignment research teams currently manage their own contractors because existing services (such as surgehq.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.

Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – i.e., rapidly producing moderately la... (read more)

Infrastructure to support independent researchers

Epistemic Institutions, Empowering Exceptional People  

The EA and Longtermist communities appear to contain a relatively large proportion of independent researchers compared to traditional academia. While working independently can provide the freedom to address impactful topics by liberating researchers from the perverse incentives, bureaucracy, and other constraints imposed on academics, the lack of institutional support can impose other difficulties that range from routine (e.g. difficulties accessing pay-walled publications) to restrictive (e.g. lack of mentorship, limited opportunities for professional development). Virtual independent scholarship institutes have recently emerged to provide institutional support (e.g. affiliation for submitting journal articles, grant management) for academic researchers working independently. We expect that facilitating additional and more productive independent EA and Longtermist research will increase the demographic diversity and expand the geographical inclusivity of these communities of researchers. Initially, we would like to determine the main needs and limitations independent... (read more)

4
Jackson Wagner
(I think this is a good idea!  For anyone perusing these FTX project ideas in the future, here is a post I wrote exploring drawbacks and uncertainties that prevent people like me from getting excited about independent research as a career.)

EA Health Institute/Chief Wellness Officer 

Empowering Exceptional People, Effective Altruism, Community Building 

Optimizing physical and mental health can improve cognitive performance and decrease burnout. We need EAs/longtermists to have the health resilience to weather the storm - physical fitness, sleep, nutrition, mental health. An institution could be created to assist EA-aligned organizations and individuals. Using best practices from high-performance workplace health, both personal and organizational, along with innovative new ideas, a wellness team could help EAs have sustainable and productive careers. This could be done through consulting, coaching, preparation of educational materials, or retreats. From a community growth perspective, EA becomes more attractive to some when one doesn’t have to sacrifice health for deeply meaningful work.

(Disclosure: I'm a physician/physician wellness SME - helping with this could be a good personal fit.)

Unified, quantified world model

Epistemic Institutions, Effective Altruism, Values and Reflective Processes, Research That Can Help Us Improve

Effective altruism started out, to some extent, with a strong focus on quantitative prioritization along the lines of GiveWell’s quantitative models, the Disease Control Priorities studies, etc. But these largely ignore complex, often nonlinear effects of interventions on culture, international coordination, and the long-term future. Attempts to transfer the same rigor to quantitative models of the long-term future (such as Tarsney’s set of models in The Epistemic Challenge to Longtermism) are still in their infancy. Otherwise, effective altruist prioritization today is a grab bag of hundreds of considerations that interact in complex ways that (probably) no one has an overview of. Decision-makers may forget to take half of them into account if they haven’t recently thought about them. That makes it hard to prioritize, and misprioritization becomes more and more costly with every year.

A dedicated think tank could create and continually expand a unified world model that (1) is a repository of all considerations that affect altruistic decisi... (read more)

3
Max Ghenis
Cool - you might also be interested in my submission, "Comprehensive, personalized, open source simulation engine for public policy reforms". It's not in the pitch but my intent is for it to be global as well.
3
Dawn Drescher
Awesome, upvoted! You can also have a look at my “Red team” proposal. It proposes to use methods from your field applied to any EA interventions (political and otherwise) to steel them against the risk of having harmful effects.

Civic sector software

Economic Growth, Values and Reflective Processes

Software and software vendors are among the biggest barriers to instituting new public policies or processes. The last twenty years have seen staggering advances in technology, user interfaces, and user-centric design, but governments have been left behind, saddled with outdated, bespoke, and inefficient software solutions. Worse, change of any kind can be impractical with existing technology systems or when choosing from existing vendors. This fact prevents public servants from implementing new evidence-based practices, becoming more data-driven, or experimenting with new service models.

Recent improvements in civic technology are often at the fringes of government activity, while investments in best practices or “what works” are often impossible for any government to implement because of technology. So while over the last five years, there has been an explosion of investments and activity around “civic innovation,” the results are often mediocre. On the one hand, governments end up with little more than tech toys or apps that have no relationship to the outcomes that matter (e.g. poverty alleviation, service deli... (read more)

3
Yonatan Cale
Hey, this is somewhat my domain. The bottleneck is not building software, it is more like "governments are old gray organizations that don't want to change anything". If you find any place where the actual software development is the bottleneck, I'd be very happy to hear and maybe take part in it. I also expect many other EA developers to want to take part, it sounds like a good project

(For context, I was the Chief Data Officer of the California State Government and CTO of Newark, NJ when Cory Booker was Mayor). 

I actually think the way to do this is to partner with one city and build everything they need to run the city. The problem is that people can't use piecemeal systems very well. It would just take a huge initial set of capital -- like exactly the type of capital that could be provided here. 

1
Yonatan Cale
Ah ok forget about it being somewhat my domain :P Sounds like a really interesting suggestion. Especially if it would be for a city that "matters" (that will help people do important things?), I think this project could interest me and others   (I'm interested if you have opinions about https://zencity.io/, as a domain expert)
1
Max Ghenis
Somewhat related, I submitted "Comprehensive, personalized, open source simulation engine for public policy reforms". Governments could also use the simulation engine to explore policy reforms and to improve operations, e.g. to establish individual households' eligibility for means-tested benefit programs.

Teaching secondary school students about the most pressing issues for humanity's long-term future

Values and Reflective Processes, Effective Altruism

Secondary education focuses mostly on the past and present, and tends not to address the most pressing issues for humanity’s long-term future. I would like to see textbooks, courses, and/or curriculum reform that promote evidence-based and thoughtful discourse about the major threats facing the long-term future of humanity. Secondary school students are a promising group for such outreach and education because they have their whole careers ahead of them, and numerous studies have shown that they care about the future. This may serve a significant benefit in making more young people care about these issues and support them with either their time or money.

Advocacy for digital minds

Artificial Intelligence, Values and Reflective Processes, Effective Altruism

Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.

X-risk Art Competitions

Fund competitions to make x-risk art to create emotion

Problem: Some EAs find longtermism intellectually compelling but not emotionally compelling, so they don’t work on it, yet feel guilty.

Solution: Hold competitions where artists make art explicitly intended to make x-risk emotionally compelling. Use crowd voting to determine winners.

Translate EA content at scale

Reach More Potential EAs in Non-English Languages

Problem: Lots of potential EAs don’t speak English, but most EA content hasn’t been translated.

Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.

7
Dawn Drescher
Little addition: I imagine that knowledgeable EAs in the respective target countries should do that as opposed to professional translators so that they can do full language and cultural mediation rather than just translating the words.

Provide personal assistants for EAs

 Empowering Exceptional People

Many senior EAs spend far too much time on busywork because it is hard to get a good personal assistant. This is currently the case because:

  1. There is no obvious source of reliable, vetted assistants.
  2. If an EA wants to become an assistant, it is hard for them to find a job with an EA org or on EA-related projects.
  3. Assistants have an incentive to take on many clients, to avoid losing income if a client drops them. This leaves assistants with less time per client, so more time is spent on communication and less on the work itself.
  4. Assistants tend to be paid personally by EAs instead of by their employers. That leads to using them less than would be optimal.
  5. There is no community of assistants that would be sharing knowledge and helping each other.

All these factors would be removed if an agency managed personal assistants.

4
Dawn Drescher
Kat Woods (Nonlinear) is someone to talk to when it comes to this project.

Institutions as coordination mechanisms

Artificial Intelligence, Biorisk and Recovery from Catastrophe, Great Power Relations, Space Governance, Values and Reflective Processes

A lot of major problems - such as biorisk, AI governance risk, and the risks of great power war - can be modeled as coordination problems, and may be at least partially solved via better coordination among the relevant actors. We’d love to see experiments with institutions that use mechanism design to allow actors to coordinate better. One current example of such an institution is NATO: Article 5 is a coordination mechanism that aligns the interests of NATO member states. But we could create similar institutions for e.g. biorisk, where countries commit to a matching mechanism - where “everyone acts in a certain way if everyone else does” - with costs imposed on defectors to solve a tragedy-of-the-commons dynamic.

2
Brendon_Wong
Sjir, you may be interested in Roote's work on meta existential risk!
1
Sjir Hoeijmakers🔸
Thank you!

Experiments with and within video games

Values and Reflective Processes, Empowering Exceptional People

Video games are a powerful tool to reach hundreds of millions of people, an engine of creativity and innovation, and a fertile ground for experimentation. We’d love to see experiments with and within video games that help create new tools to address major issues. For instance, we’d love experiments with new governance and incentive systems and institutions, new ways to educate people about pressing problems, games that simulate actual problems and allow players to brainstorm solutions, and games that help identify and recruit exceptional people.

Replicate the Project Ideas Competition for other types of communities than EAs

Research That Can Help Us Improve

People have contributed a lot of really insightful and promising ideas here. Given that "there are no wrong ideas in brainstorming" and that there may be systematic blind spots for effective altruists/longtermists' paradigm, perhaps doing this broad-idea-crowdsourcing exercise in other types of communities could get us new, potentially promising ideas. 

Regular prizes/awards for EA art

Effective Altruism

Works of art (e.g. stories, music, visual art) can be a major force inspiring people to do something or care about something. Prizes can directly lead to work (see for example the creative writing contest), but might also have an even bigger role in defining and promoting some type of work or some quality in works. Creating a (for example) annual prize/award scheme might go a long way towards defining and promoting an EA-aligned genre (consider how the existence of Hugo and Nebula awards helps define and promote science fiction). The existence of a prestigious / high-paying prize for the presence of specific qualities in a work is also likely to draw attention to those qualities more broadly; news like "Work X wins award for its depiction of [thoughtful altruism] / [the long-term future] / [epistemic rigor under uncertainty]" might make those qualities more of a conversation topic and something that more artists want to depict and explore, with knock-on effects for culture.

Impact markets to smooth out retroactive funding

Effective Altruism, Empowering Exceptional People, Economic Growth, Epistemic Institutions

Yonatan Cale already made the case for retroactive funding, i.e. that it’s easier to tell what has succeeded than what will succeed. The question of what will succeed, in turn, can be answered by a market.

Investors will try to predict which charities will succeed to the point of receiving retroactive funding. A retroactive funder can make larger grants in proportion to their reduction in uncertainty (5–10x), time savings from having to do less vetting (~ 2x), and delay (~ 1.5x). Hence investors with enough foresight can even make a profit and turn the prediction of retro fund decisions into their business model. Promising charities can bootstrap rapidly with these early financial injections, successful serial charity entrepreneurs can accumulate more and more capital to reinvest into their next charity venture, and funders save time because they have to do only a fraction of the vetting.
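As a back-of-the-envelope illustration only (the function, dollar amount, and midpoint are made up; the multipliers are the rough figures above), the retroactive grant a funder could justify might be sketched as:

```python
def retro_grant(prospective_grant, uncertainty_mult=7.5,
                vetting_mult=2.0, delay_mult=1.5):
    """Size a retroactive grant relative to a prospective one, using
    the rough multipliers from the text: 5-10x for reduced uncertainty
    (midpoint 7.5), ~2x for vetting-time savings, and ~1.5x for the
    funding delay investors have to carry."""
    return prospective_grant * uncertainty_mult * vetting_mult * delay_mult

# A project that would have merited a $10k prospective grant:
print(retro_grant(10_000))  # 225000.0
```

The point of the sketch is just that the multipliers compound, which is what leaves room for investors to profit while the funder still comes out ahead.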

We – Kenny Bambridge, Matt Brooks, Dony Christie, Denis Drescher, and a number of advisors – are actively working toward this goal. I’ve been thinking about the m... (read more)

Studying Economic Growth Deterrents and Cost Disease

Economic growth

Economic growth has forces working against it. Cost disease is the most well-known and pernicious of these in developed economies. We are interested in funding work on understanding, preventing, and reversing cost disease and other mechanisms that are slowing economic growth. 

(Inspired by Patrick Collison)

Secure full-stack open-source computing for information security

Artificial Intelligence, Biorisk, Research That Can Help Us Improve

Much of our sensitive research and weaponry, like AI, biolabs, nuclear weapons, etc., is built on insecure infrastructure. Think of a future scenario where one hacker could take control of fleets of self-driving cars and essentially have a swarm of missiles. Real information security would require building the full stack of computing, from hardware, OS, and compilers to the application layer. It would also ideally be open-source and inspectable to ensure security.

Funding Stress/Penetration Tests of vital orgs/infrastructure

Cyber Risks, Cybersecurity

Most orgs don't spend enough on ensuring their infrastructure is safe from hackers and we should ensure that labs working on AI safety, biorisk companies, EA orgs etc. are safe from malicious hackers.

Longtermist democracy / institutional quality index

Values and Reflective Processes, Epistemic Institutions

Several indices exist to quantify the degree of liberal democracy in all countries and territories around the world, like Freedom in the World and the EIU's Democracy Index. These indices are convenient for describing and comparing the state of liberal democracy in different countries, because they distill the various complicated aspects of a state's political system into one or more numbers that are easy for a layperson to understand.

We propose a "democracy index" that emphasizes the qualities of political systems that are most relevant to making the long-term future go well. Such qualities could include voting systems, free and fair elections, voter competence, and capacity for long-term planning in government - and the set of qualities used could be based on research such as this post. This index would help make analysis of countries and territories' political systems more accessible to EAs/longtermists who aren't political scientists, since it would distill them down to a few easy-to-understand numbers. It would also help the longtermist community track progress towards bet... (read more)
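To make the "distill down to a few easy-to-understand numbers" idea concrete, here is a minimal sketch of a weighted index. The component names, scores, and weights are all placeholders for illustration, not a proposed methodology:

```python
def longtermist_index(scores, weights):
    """Weighted average of 0-1 component scores for one country."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

# Placeholder components and weights -- not a proposed methodology.
country = {
    "voting_system": 0.8,
    "free_fair_elections": 0.9,
    "voter_competence": 0.5,
    "long_term_planning": 0.4,
}
weights = {
    "voting_system": 1.0,
    "free_fair_elections": 1.0,
    "voter_competence": 1.5,
    "long_term_planning": 2.0,
}
print(round(longtermist_index(country, weights), 3))  # 0.591
```

The real research work, of course, is in choosing and measuring the components and justifying the weights, which is where the linked post would come in.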

Fund Sentinel, a nationwide pandemic early response system (originally suggested by alexrjl)

Biowarfare

Fund the biosecurity program explained on this podcast. Any time anyone gets sick you sequence a sample. Any unknown genetic material gets sequenced again at a higher level. This allows for rapid response to new pathogens.

 

Politician forecasting stipend

Politics, better epistemics

Many people think politicians are underpaid. Many think they have a poor grasp of the likelihood of future events. Offer every Senator and Representative a yearly sum to make public predictions about future public statistics. The forecasting would help them correct their own errors and provide a valuable source of information on who makes good decisions about the future and who doesn't.
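One standard way to grade such public predictions is the Brier score (lower is better). A minimal sketch, with entirely made-up forecasts:

```python
def brier(forecasts):
    """Mean squared error of probabilistic predictions.
    forecasts: list of (predicted probability, outcome in {0, 1}).
    Lower is better; uniform 50/50 guessing scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Made-up predictions by one legislator about future statistics:
senator = [(0.9, 1), (0.6, 0), (0.3, 0)]
print(round(brier(senator), 3))  # 0.153
```

Publishing each legislator's score over time would give voters exactly the "who makes good decisions about the future" signal the proposal describes.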

3
Jakob
See one version of this here: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=rTHGFbfr8DXwqnA2B

Making Future Grantmaking More Optimal

Effective altruism

  • The EA community will likely spend much more money in the future than it spends now. Grantmaking is hard and the right setup is controversial. Hence, it might make sense to spend money on figuring out how to do it well.
  • One could convene so-called "donation parliaments" of 100 randomly selected citizens who receive expert/EA input, or invite 100 top academics to give away $10 million. Try out expert committees or democratic control. Organising such donation parliaments could also attract positive media attention.

Moderators for EA/Longtermist FB/Groups or Discords

Effective Altruism

(Refinement of EA-relevant Substacks, Youtube, social media, etc. )

Given the huge amount of funding available to EA, we probably don't want to skimp on moderators for major Facebook or Slack or Discord groups even though these have traditionally been run by volunteers. It'd be worthwhile at least experimenting to see if paid part-time moderators would be able to add extra value by writing up summaries/content for the groups, running online calls, setting up networking spreadsheets and spending more time thinking through strategy.

Risks: We might end up paying money for work that we would have gotten for free. Attempts to set up networking spreadsheets or run calls might have minimal participation and hence minimal impact.

1
mic
Fyi paid part-time moderators would need buy-in from the online community – the EA Corner Discord seems against paid moderators, for example. I really appreciate how many ideas you're proposing, Chris!
2
Chris Leong
Of course. I guess I could see some negative effects from how it could encourage people to seek the mod roles as a way of being paid rather than because they could do a good job. I think that this issue could mostly be avoided by offering student level rates. Running a Facebook group seems like a nice entrypoint into EA movement building. Thanks, you're welcome!
3
Greg_Colbourn
Incentives could also be aligned by offering existing volunteer mods a salary to spend more time moderating.

More Insight Timelines

In 2018, the Median Group produced  an impressive timeline of all of the insights required for current AI, stretching back to China's Han Dynasty(!)

The obvious extension is to alignment insights. Along with some judgment calls about relative importance, this would help any effort to estimate / forecast progress, and things like the importance of academia and non-EAs to AI alignment. (See our past work for an example of something in dire need of an exhaustive weighted insight list.)

Another set in need of collection is more general breakthroughs - engineering records broken, paradigms invented, art styles inaugurated - to help us answer giant vague questions about e.g. slowdowns in physics, perverse equilibria in academic production, and "Are Ideas Getting Harder to Find?"
 

Research differential technological progress and trajectory changes
Research That Can Help Us Improve, Values and Reflective Processes

The idea of differential technological progress (DTP) may be a crucial consideration for many at-first-glance good ideas, like:

- improving scientific publishing

- increasing GDP

- increasing average intelligence

 

But given its importance, there hasn't been much research or publication on DTP.

The central question for research is how to use DTP to prioritize interventions. Examples of subquestions to research are:

- when intentional trajectory changes have occurred in the past.

- what subgoals seem to be good when DTP is considered.

- and so on.

Bridging-based Ranking for  Recommender Systems

Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Great Power Relations

Recommender systems are used by platforms like FB/Meta, YouTube/Google, Twitter, TikTok, etc. to direct the attention of billions of people every day. These systems, due to a combination of psychological, sociological, organizational, etc. factors, are currently most likely to reward content producers with attention if they stoke division (e.g. outgroup animosity). Because attention is a currency that can be converted into money, power, and status, this “bias toward division” impacts groups at every scale, from local school boards to Congress to geopolitics.

Ensuring that recommender systems can mitigate this bias is crucial to functional democracy, to cooperation on catastrophic risks (e.g. AGI, pandemics,  climate change), and simply to reducing the likelihood of escalating wars. We urgently need more research on how to better design recommender systems;  we need to create open source implementations that do the right thing from the start which can be adopted by cash-strapped startups; and we need a mix of pressure and support to ensure these improvements will be rapidly deployed at platform scale.
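One toy version of the "bridging" idea (the grouping and the numbers below are hypothetical; real proposals infer the divides from behavior rather than assuming them): instead of ranking a post by total engagement, rank it by its *minimum* approval rate across groups, so content only scores highly if every group responds positively to it.

```python
def bridging_score(approvals_by_group):
    """approvals_by_group: {group: (positive, total)} interactions.
    Score = minimum approval rate across groups, so divisive content
    that delights one side while enraging the other scores near zero."""
    rates = [pos / tot for pos, tot in approvals_by_group.values() if tot > 0]
    return min(rates) if rates else 0.0

divisive = {"group_a": (90, 100), "group_b": (5, 100)}
bridging = {"group_a": (60, 100), "group_b": (55, 100)}
print(bridging_score(divisive), bridging_score(bridging))  # 0.05 0.55
```

An engagement-maximizing ranker would prefer the divisive post (95 total approvals vs. 115 spread thinner per group is not the point; its 90% approval in one camp dominates feeds there); the bridging score inverts that preference.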

Headhunter Office: targeted recruitment of aligned MDs, and other mid-career professionals

Effective Altruism, Community Growth and Diversity

I am a physician, and I have several conversations a week with bright, altruistic, and burned out colleagues. These professionals are often in a position to earn to give, and also can be entrepreneurial and adept at navigating complex systems and could be future organizational leaders or 'founder types'. Currently, there are cosmetic MLM groups and others recruiting from this group of physicians looking to make their lives more fulfilling and meaningful while still earning an income - there is literally an MD Exit strategy facebook group. 

I propose an EA headhunter office to recruit for the community. For example, recruiting physicians explicitly, using some of the successful techniques that pharma uses like having physicians recruit their peers. Perhaps there are similar aligned mid-career professionals in law, public administration, engineering, etc.

Support for EAs having children 

Empowering Exceptional People, Effective Altruism 

Children of EAs are much more likely to become EAs (100-1000x?), and future generations of EAs may have a large impact. Having children usually means a pause in work which is poorly compensated and difficult to time. I propose an institute to support EAs wishing to have children. EAs could be supported with fertility costs, including egg freezing, and be given grants for parental leave which improve parental and child health outcomes. There are many trade-offs in parenting, which could be discussed in an EA parenting forum. Building a community could benefit these EA parents and their children.

Evidence for 100-1000x estimate? What is the base rate for children following their parents? When I've seen this discussed before, the conclusion is usually that memetic transfer of EA is much easier than genetic transfer of EA.

1
Lauren Reid
That’s a good point. I agree it’s the culture more than the DNA which matters, and I don’t know the real numbers. Of course, my husband and I were identified as gifted and it looks very likely our children will be too, and we also have read them Open Borders as a bedtime story. Adopted children of EAs are also probably much more likely than average to become EAs. I think of organized religions and how they encourage children and future generations - there may be a lesson for us there.
3
JackM
I would have thought it would be higher value on the margin to spread EA to talented people who already exist than to make more people.
3
Larks
Very minor, but just wanted to check you were aware of, and had joined if interested, the EA parents facebook group.
3
Lauren Reid
I didn’t know that, nor did my husband (Alex D), who is much more in the EA space. Thank you for posting, I will join. We are seriously considering going to Nassau with our neuro atypical kids (4 and 7) next winter, and are trying to figure out how it would work for schooling/childcare in particular.

DIY decentralized nucleic acid observatory

Biorisk and Recovery from Catastrophes

As part of the larger effort of building an early detection center for novel pathogens, a smaller self-sustaining version is needed for remote locations. The ideal early-detection center would have surveillance stations not only in the largest hubs and airports of the world, but also in as many medium-sized ones as possible. This requires a ready-made, small, transportable product which allows metagenomic surveillance of wastewater or air ventilation. One solution would be designing a workflow utilizing the easily scalable and portable technology of nanopore sequencing and combining it with a workflow to extract nucleic acids from wastewater. Sharing instructions on how to build and use this method could lead to a "do it yourself" (DIY) and decentralized version of a nucleic acid observatory. Instead of staffing a whole lab at a central location, it would be possible to have only one or two personnel in key locations who use this product to sequence samples directly and transmit just the data to the larger surveillance effort.


 

A global observatory for institutional improvement opportunities

Research That Can Help Us Improve, Great Power Relations, Epistemic Institutions

Actions taken by powerful institutions—such as central governments, large corporations, influential media outlets, and R&D labs—can dramatically shape people's lives today and cast a shadow long into the future. It can be hard to know what philanthropic strategies would be most likely to drive better outcomes, however, because each individual institution is itself a complex ecosystem of incentives, external pressures, norms, policies, and bureaucratic structures. An ongoing project to document how important institutions operate in practice and spot relevant windows of opportunity (e.g., legislation under consideration, upcoming leadership transitions, etc.) as they emerge would be very helpful for mapping the strategic landscape across virtually all of our other interest areas.

EA content translation service

Effective Altruism, Movement Building

(Maybe add to #30 - diversity in EA)

EA-related texts often use academic language needed to convey complex concepts. For non-native speakers, reading and understanding those texts takes much more time than reading about the same topic in their native language would. Furthermore, many educated people in important positions today, especially in non-Western countries, speak English poorly or not at all. (This is likely part of the reason that EA currently exists mainly in English-speaking countries and consists almost exclusively of people who speak English well.)

To make EA widely known and easy to understand there needs to be a translation service enabling e.g. 80k, important Forum posts or the Precipice to be read in different languages. This would not only make EA easier to understand - and thus spread ideas further - but also likely increase epistemic diversity of the community by making EA more international.

Pipeline for writing books

Effective altruism

It's plausible that more EAs/longtermists should be writing books on the interesting subjects they are experts in, but they currently do not because of a lack of experience or other types of friction. Crowdsourced resources, networks, and grants may help facilitate this. Books written by EAs would have at least two benefits: (a) dissemination of knowledge, and (b) earning-to-give opportunities (via royalties).

This is an interesting idea; it definitely seems plausible that EAs (who often have a lot of unique knowledge!) might be underrating the benefits of writing books.  Could you expand a little on what you are thinking here?  (I'd also be interested to hear from anyone else with relevant experience.)  How hard is it to publish a book?  If you try, do you have a high chance of getting rejected?  How do people usually do marketing and get people to read their stuff? 

Maybe this is too cynical of me (or too internet-centric), but I doubt the main benefits would come from earning royalties (not likely to be very profitable relative to other things skilled EAs could be doing!) or spreading knowledge (just read the blog posts!).  But I think trying to publish more EA books might help greatly with:

  1. Prestige and legibility (just like how academic papers are considered more legit than blog posts by academics and governments).  It might be easier for, say, the US Democratic Party to get behind an EA-inspired pandemic-prevention plan, foreign-aid revamp, or prediction-markets-y institutional-reform agenda if they could point to a prestigious book rather than
... (read more)
1
Peter S. Park
Thanks so much, Jackson! I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.
1
Logan Riggs
My brother has written several books and currently coaches people on how to publish it and market it on Amazon. He would be open to being paid for advice in this area (just dm me) I think the dissemination and prestige are the best arguments so far.

Institute/Grants for improving the science of indoor air quality

Biorisk

During Covid we learned that ‘air is the new poop’ in terms of hygiene. Improving indoor air quality can prevent respiratory pathogen transmission both in the case of a pandemic and for general health. A granting agency could support advances in indoor air quality and their implementation such as in airplanes and classrooms.

Longtermism Policy Lab

Epistemic Institutions, Values and Reflective Processes, Great Power Relations, Space Governance, Research That Can Help Us Improve

Despite the growing recognition of the importance of long-term perspectives, governance remains oriented around short-term incentives. More coordination and collaboration between researchers and policymakers, practitioners, and industry professionals is needed to translate research into policy. The Longtermism Policy Lab will bridge this gap, working with societal partners and governments at all levels (local to global) to undertake policy experiments. The Lab will also contain a research component, establishing and pursuing an ambitious interdisciplinary Longtermism research agenda, including an emphasis on research that doesn't fit well within either academia or traditional research institutes. We want to see this organisation serve as a direct link between longtermism as a governance approach and its implementation within all levels of governance across the globe.

(Per Nick's post, reposting)

Private-sector ARPA models

All

Many of the technological innovations of the last fifty years have their genesis in experiments run by DARPA. ARPA models are characterized by individual decision-makers taking on risky bets within defined themes, setting ambitious goals, and mobilizing top researchers and entrepreneurs to meet them. We are interested in funding work to study these funding models and to create similar models in our areas of interest. 

In case you drew inspiration from some of our suggestions in the megaprojects article, we would like to retroactively apply. 

Promote Ethical Corporate Behavior

EA to purchase 5% of Blackrock and 5% of Vanguard shares. To be clear, I don't mean 5% of their index funds, but rather 5% of the underlying fund management companies.

EA's investment of circa $10 billion can be leveraged into a board seat at companies that manage circa $20 trillion in assets. EA could lobby these companies to apply a corporate ethics test to all their index funds, e.g. excluding coal and promoting other EA priorities.

4
MaxRa
Thanks for this suggestion, I'm really interested in this general direction, in case anybody wants to dig into it a bit more. It spontaneously seems unlikely to me that investing this large a share of EA money is the best bet, but I wonder if there are other ways to influence them (e.g., as I understand it, Blackrock and Vanguard senior managers simply got convinced that climate change is a downside for their long-term profit, and they probably should believe the same about misaligned AI / AI races). Maybe another route would be to ensure that those investment firms are able to influence & coordinate the behavior of tech firms to reduce competitive dynamics.

Anti-Pollution of the universe

Space governance

As we take one small step for man, our giant leap for humanity leaves footprints of toxicity that we justify as ‘negative externalities’. There are currently 20,000 catalogued objects, comprising rocket shards, collision debris, and inactive satellites, which cause major traffic risks in orbit around our planet whilst also likely polluting the universe. As Boeing, OneWeb, SpaceX, etc. increase their launches, we similarly add to the congestion and space collision probabilities (read: disasters waiting to happen). There are currently NO debris removal methods in operation. If we’ve learnt anything from our brief micro-history of mankind on Earth, it’s that the nature/universe around us is important, since we’re intricately linked, and that there are costs to our polluting behaviour in the pursuit of ‘territory/energy etc.’. Hence, when we’re playing at the macro cosmic level, it is even more imperative that we get this framework/relationship/thought process right.

Nuclear Funding Shortfall

Nuclear Risk

There has been a significant shortfall in nuclear risk funding. The most effective elements of this could be covered by the fund.

Superforecasting team

Global catastrophic risks

We know that top forecasters exist, but few are currently employed to forecast long-term risks. These forecasters should be supported by developers to help maximise their accuracy and output. Multiple organisations could employ hundreds or thousands of top forecasters to analyse developing situations and suggest the outcomes most likely to resolve them in the interests of all consciousness.

Build LMIC university capacity

Economic growth, Empowering exceptional people

Universities in LMICs often have limited access to funding. Additional funding could enable many good outcomes including:

  • Greater opportunities for exceptional people born in LMICs
  • Better and more influential academic contributions from people in LMICs, which would increase the diversity of backgrounds of people contributing to global academia, and perhaps uncover key errors and blindspots in Western academic thought.
  • Boost economic growth in LMICs
  • Boost epistemic standards in LMICs
  • Help improve LMIC capacity to understand and plan for catastrophes, e.g. from pandemics and climate change. 

Funding could be focussed on issues of concern to EAs, such as pandemics, or could be unrestricted to boost overall university capacity. As well as funding universities, funds could be provided for networks, independent labs, access to journals, travel and conferences, spinout companies etc. 

Increasing Earth’s probability of survival

Space governance

Currently, as we transition from a Kardashev Type 0 to a Type 1 civilization, our probability of encountering/alerting other civilisations increases exponentially. It is somewhat ironic that we may fall by our own impetus. Citing dark forest theory, the end game is such that ‘lacking assurances, the safety option for any species is to annihilate other life forms before they have a chance to do the same’, meaning that humanity is immediately on the defensive (applying a chronological framework and assuming linearity of time). As such, we should fund ways to increase our probability of survival (by deterrence mechanisms, signalling non-threat, or camouflage) such that we may evolve uninterrupted. (This also assumes we don’t kill ourselves first, the probability of which is sadly also non-zero.)

Just throwing out crazy suggestions (I’m sensing that’s the theme here): something like a hyper-gravity generation device that bends observable light emitted from our planet, so much so that when observed, we would look like a black hole.

Combatting Deepfakes

Epistemic Institutions, Artificial Intelligence


As AI advances, numerous high-quality deepfake videos/images are being produced at an alarming and increasing rate. Delving into the question ‘What happens when we can’t trust our eyes and ears anymore?’ immediately raises obvious signals that this will affect many industries, such as journalism, the military, celebrities, government, etc. Proactively funding a superior ML anti-deepfake bot for commercial use is important so that images/videos can be properly verified. The end game will likely come down to some degree of superior computing power, since both are ML-based algorithms; hence the advantage here would be first-mover and/or altruistic (think of free antivirus software) in nature. 

Targeted practical statistical training

Economic Growth, Values and Reflective Processes

Human cognition is characterized by cognitive biases, which systematically lead to errors in judgment: errors that can potentially be catastrophic (e.g., overconfidence as a cause of war). For example, a strong case can be made that Russia's invasion of Ukraine was an irrational decision by Putin, a consequence of which is potential nuclear war. Overconfidence is a cause of wars and of underpreparation for catastrophes (e.g., pandemics, as illustrated by the COVID-19 pandemic).

One way to reduce detrimental and potentially catastrophic decisions is to provide people with statistical training that can help empower beneficial decision-making via correct calibration of beliefs. (Statistical training to keep track of the mean past payoff/observation can be helpful in a general sense; see my paper on the evolution of human cognitive biases and implications.) At the moment, statistical training is provided to a very small percentage of people, and most provisions of statistical training are not laser-focused on the improvement of practical learning/decision-making capabilities, but for other indir... (read more)

Mental health treatment to prevent anthropogenic catastrophic/existential risks

Biorisk and Recovery from Catastrophe

Issues of mental health can be very harmful to the well-being of the self and others. The degree to which this harm can occur can, when combined with technology, even result in catastrophic/existential risks. (The Russian invasion of Ukraine, the cause of which may be the mental state of Putin, can plausibly lead to nuclear war. Another example is engineered pandemics.) Given the disproportionately anthropogenic skew of catastrophic/existential risks, research/funding/advocacy for mental health treatment (general or targeted) may help prevent such risks.

Reminds me of some of the proposals here: https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors

We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future.

  • The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More)
  • We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More)
  • Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More)
1
Peter S. Park
Yes, I think these proposals together could be especially high-impact, since people who pass screening may develop issues of mental health down the line.

Rule of Law Fund

Values and Reflective Processes and Economic Growth

A strong rule of law helps ensure equity, human rights, property rights, contract enforcement, and due process. Many countries are still developing their legal systems. Between 2010 and 2020 twenty-four different countries ratified a constitution.  The legal systems that evolve today will have a lasting impact on future generations.

This fund would offer funding for organizations and individuals engaged in legal scholarship and litigation that align with  the Future Fund’s guiding principles, with a specific focus on strengthening the rule of law in countries with less developed legal institutions.

Reflection Retreats

Effective Altruism

There are certain points in our lives when the decisions we make can greatly affect our trajectory. These could include deciding what degree to study, graduating, or making a major career change. These retreats would bring together a group of EAs (possibly some non-EAs too) to reflect on these decisions and start making applications/plans, etc.

AI alignment prize suggestion: Improve our ability to evaluate (and provide training signal for) fuzzy tasks

Artificial Intelligence

There are many tasks that we want AI systems to do, for which performance cannot be evaluated automatically (and thus training signal provision is hard). If we don't make progress on our ability to train systems for such tasks, we might end up in a world full of systems that optimise for that which is easy to measure, rather than what we actually want. One example of such a task is the evaluation of free-form text; there is currently no automated method to evaluate free-form text (with respect to criteria such as usefulness or correctness) that matches human evaluation. The Future Fund could offer prizes for work that takes a task for which the gold-standard of evaluation is humans, and demonstrates an automated evaluation method that matches human evaluation very closely (or work that demonstrates an automated evaluation method to be superior to human evaluation).

Note: This is crucially not the same as "training models to perform well on the task in question". There are a number of technical reasons why what I suggest is easier. Intuitively, evaluating performance is often considerably easier than generating good performance. For example, I can watch a movie and say if it's good, but I can't make a good movie.
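One way to operationalise "matches human evaluation very closely" is to measure the rank correlation between an automated evaluator's scores and human judgements on a held-out set of texts. A minimal sketch (the scores are made up; Spearman's correlation is implemented from scratch to stay dependency-free):

```python
def ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend the block while values are tied.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(human, auto):
    """Spearman rank correlation between human and automated scores."""
    rh, ra = ranks(human), ranks(auto)
    n = len(rh)
    mh, ma = sum(rh) / n, sum(ra) / n
    cov = sum((a - mh) * (b - ma) for a, b in zip(rh, ra))
    sd_h = sum((a - mh) ** 2 for a in rh) ** 0.5
    sd_a = sum((b - ma) ** 2 for b in ra) ** 0.5
    return cov / (sd_h * sd_a)

# Made-up usefulness ratings for six free-form answers:
human_scores = [4, 2, 5, 1, 3, 5]
auto_scores = [3.8, 2.1, 4.9, 1.2, 3.1, 4.7]
print(round(spearman(human_scores, auto_scores), 3))
```

A prize criterion might then be a correlation threshold on unseen data, though a serious evaluation would also need to check robustness to texts chosen adversarially against the automated evaluator.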

EA Programming Bootcamp

Effective Altruism

Providing a programming bootcamp to members of the Effective Altruism community could be a way of assisting struggling community members whilst avoiding the issues inherent in directly providing cash assistance. It could also allow community members to accelerate their career progression.

Notes: See the comments here for some of the issues with giving cash.

I suspect that the impact of this would be larger than it first appears as a) talented people generally want to be part of a community where people are successful b) if community members are struggling then that takes up the time of other community members who try to help them.

5
Yonatan Cale
I think funding programming bootcamps is a great idea (if anyone needs it) and I intend to fund the first 3 people who'll ask me even just to see how it goes. [This is not a formal commitment because I don't want to think of all the edge cases like a $10k course; but I do currently intend to do it. DM me if you want]
6
Chris Leong
Most proper bootcamps are very expensive, around that kind of rate I'd guess.
1
Yonatan Cale
Ah, I meant online courses (Sorry, language mistake on my side)

CEA for the developing world

Effective Altruism

The main EA movement-building organization, CEA, focuses primarily on talented students at top universities in developed countries. This seems to be due to a combination of geographical and cultural proximity, the quantity of English speakers, and the ease of finding top talent. However, there is a huge amount of untapped talent in developing countries that may be more easily reached through dedicated organizations optimized for being culturally, linguistically, and geographically close to that talent, such as a CEA for India or Brazil. Such organizations would develop their own goals and strategies tailored to their respective regions, such as prioritizing nationwide prizes over group-by-group support, hiring local EA talent to lead projects, and identifying and partnering with regionally influential universities and institutions. This project would not only contribute to increasing diversity in EA, but also foster organizational competition by allowing different movement-building strategies, and better position the EA movement for unexpected geopolitical power shifts.

An ecosystem of organizations to initiate a “Hasty Reflection”

Values and Reflective Processes, Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve

The Long Reflection appears to me to be robustly desirable. It only suffers from being more or less unrealistic depending on how it is construed.

In particular, I feel that two aspects of it are in tension: (1) delaying important, risky, and irreversible decisions until after we’ve arrived at a Long Reflection–facilitated consensus on them, and (2) waiting with the Long Reflection itself until after we’ve achieved existential security.

I would expect, as a prior, that most things happen because of economic or political necessity, which is very hard to influence. Hence the Long Reflection either has to ramp up early enough that we can arrive at consensus conclusions and then engage in the advocacy efforts that’ll be necessary to improve over the default outcomes or else risk that flawed solutions get locked in forever. But the first comes at the risk of diverting resources from existential security. This indicates that there is some optimal trade-off point between existential security and timely conclusions. (From ... (read more)

Scandinavian-like parental leave (25 weeks +) in EA organizations

Leading the way with a policy combating demographic decline, while supporting talent selection and diversity in the EA community 

Paid parental leave creates an incentive to have (more) kids - or rather, it takes away part of the large financial incentive not to have kids. My concrete suggestion is to fund Scandinavian-like parental leaves for employees in specified EA organizations. This would open up more access to the large pool of talented family oriented persons. Further, having an unusually beneficial parental leave benefit could inspire other organizations to follow, and thus help combat demographic decline. The idea should be quite easy to pilot, implement and scale, and the results relatively easy to measure.

2
Greg_Colbourn
Good idea, just in terms of talent selection and diversity. Has such parental leave had a noticeable effect on fertility / demographic decline in Scandinavia?
2
Kjersti Moss
Hi, Greg. Thank you for your question. I'm very interested in exploring this idea further. First, I want to say that I have not done deep research on the topic. But I know some stuff, and I suspect some stuff. I could be wrong. Here are some of my thoughts:

1) My first point is that it is very plausible that the paid leave period has a substantial effect on birth rates. I would sort of have a null hypothesis that there is a large effect, rather than a zero/small effect.

2) I'm a statistician, and normally don't put too much weight on personal experience. But before all three of my pregnancies, a thorough analysis with the conclusion "this is doable financially" was very important in my decision-making. This N=1 (or N=3 if counting three kids) partly forms my opinion on the null hypothesis above. My impression is that most responsible adults have similar thought processes before getting pregnant. Further, after graduating, I was very motivated to have an impactful career. The apparent lack of job security and paid parental leave made sure I was 0% interested in any job at an EA org. Not the end of the world in this specific case, but there are probably a lot of more talented women out there who also have 0% interest for the same reasons.

3) I wrote Scandinavian-like because these are countries with generous parental leaves, and I know the setup well, as I'm Norwegian. Again, I have not researched in detail, but all three Scandinavian countries have birth rates well above the average in Europe. I also know France is at the top of the birth statistics and has a generous set-up. If I were French, the headline would maybe point to France instead of Scandinavia :)

4) Apart from (3), it is not straightforward to see a direct effect of leave times on birth rates in Scandinavia. Policies change very slowly over time (might add 1 week every now and then), and changes go hand-in-hand with other policies such as free/subsidized kindergartens.

5) It seems to be a bit under-researche

Monitoring Nanotechnologies and APM
Nanotechnologies, and a catastrophic scenario linked to them called “grey goo”, have received very little attention recently (more information here), whereas nanotechnologies keep moving forward, and some think they are one of the most plausible paths to extinction. 

We’d be excited that a person or an organization closely monitors the evolution of the field and produces content on how dangerous it is. Knowing whether there are actionable steps that could be taken now or not would be very valuable for both funders and researchers of the longtermist community.

Quick start kit for new EA orgs

EA ops

Stripe Atlas for longtermist orgs. Rather than having every new org figure out the best tools, registrations, and practices, figure out the best default options and provide an easy interface to start up faster.

7
Dawn Drescher
I just read the Charity Entrepreneurship handbook How to Launch a High-Impact Nonprofit. That seems to fit the bill. Maybe having country-specific versions of it, and versions for longtermist orgs, would be even better.

Campaign to eliminate lead globally

Economic Growth

Lead exposure limits IQ, takes over 1M lives every year and costs Africa alone $130B annually, 4% of GDP: an extraordinary limit on human potential. Most lead exposure is through paint in buildings and toys. The US banned lead paint in 1978 but 60% of countries still permit it. We would like to see ideas for a global policy campaign, perhaps similar to Bloomberg’s $1B tobacco advocacy campaign (estimated to have saved ~30M lives), to push for regulations and industry monitoring.

Epistemic status: The “prize” feels very large but I am not aware of proven interventions for lead regulations. 30 minutes of Googling suggests the only existing implementer (www.leadelimination.org) might be too small for this level of funding so there may not be many applicants. 

Conflict of interest: I work for a small, new non-profit focused on $B giving. We are generally focused on projects with large, existing implementers so have not pursued lead elimination policy beyond initial light research

Research institute focused on civilizational lock-in

Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism

One source of long-term risks and potential levers to positively shape the future is the possibility that certain values or social structures get locked in, such as via global totalitarianism, self-replicating colonies, or widespread dominance of a single set of values. Though organizations exist dedicated to work on risks of human extinction, we would like to see an academic or independent institute focused on other events that could have an impact on the order of millions of years or more. Are such events plausible, and which ones should be of most interest and concern? Such an institute might be similar in structure to FHI, GPI, or CSER, drawing on the social sciences, history, philosophy, and mathematics.

[anonymous]

Nonprofit Growth Research Think Tank/Consultancy

EA Ops, Effective Altruism


Most EA organisations and projects will be faced (at several points during their organisational lifecycle) with changes due to the growth of their teams.
If handled poorly, growth brings many "growing pains": processes, policies, financial systems, (project) management, and organisational/team structures that no longer fit the new status quo.

We'd love to see an organization that guides other EA organisations on their path to growth by identifying the right strategies and blind spots to manage the change phase in a period of growth.

Prevent stable global totalitarian regimes through uncensorable broadcasts

Great Power Relations, Epistemic Institutions

Human civilization may get caught in a stable global totalitarian regime. Current and past totalitarian regimes have struggled with influences from the outside. So it may be critical to make sure now that future global totalitarian regimes will also have influences from the outside.

North Korea strikes me as a great example of a totalitarian regime straight out of 1984. Its systematic oppression of its citizens is so sophisticated that I could well imagine a world-wide regime of this sort to be stable for a very long time. Even as it exists today, it’s remarkably stable.

The main source of instability is that there’s a world all around North Korea, and especially right to its south, that works so much better in terms of welfare, justice, prosperity, growth, and various moral preferences that are widely shared in the rest of the world.

There may be other sources of instability – for example, I don’t currently understand why North Korea’s currency is inflated to worthlessness – but if not, then we, today, are to a hypothetical future global totalitarian state what the r... (read more)

2
MaxRa
Interesting idea. Have you thought more about sources of instability and weighed them? Would be interested. Others that come to mind are:

  • North Korean citizens must be fairly unhappy about a lot of the government, and it wouldn't take much for them to support a coup against it
  • the military leadership is never perfectly aligned with the government and historically seems ready to coup under certain circumstances
  • having successors that can sustain autocratic rule
4
Dawn Drescher
I’ve written this article about human rights in North Korea. Some parts are probably outdated now, but others are not, and the general lessons hold, I think.

1. All but very few of the citizens are isolated from all information from the outside, so that they have no way to know that the rest of the world isn’t actually envious of the prosperity of North Korea, that they aren’t under a constant threat from the US, that the south isn’t just US-occupied territory, etc. The only things that can weaken this information monopoly are phone networks from China that extend a bit across the border, leaflets from South Korea, and similar influences from the outside. But they are localized, because people are not allowed to move freely within the country. The information monopoly of the government is probably fairly complete a bit further away from the borders. But note that I haven’t been following this closely in the past 5 years. They also have a very powerful system in place where everyone is forced to snitch on everyone else if they learn that someone else knows something that they shouldn’t know, or else they and their whole family can go to prison or a concentration camp. The snitching is also systematically, hierarchically organized, so that there are always overseers for small groups of citizens, and those overseers have their own overseers, and so on, so that everyone can efficiently be monitored 24/7. A big exception to that is all the “corruption” and the gray markets. They’ve basically become the real economy of the country. But those are mostly based on Chinese currency, Chinese phones and networks, etc. So again I think black markets would be easier to prevent if there were no outside influences.

2. Without outside forces to defend against, you can concentrate completely on using the military as a mechanism of oppression as opposed to giving it any real power. Almost everyone in NK is in the military, but that’s just to keep them busy and to have them bu
1
Peter S. Park
It's plausible that compared to a stable authoritarian nuclear state, an unstable or couped authoritarian nuclear state could be even worse (in the worst-case scenario, and potentially even in expected value). For a worst-case scenario, consider that if a popular uprising were on the verge of ousting Kim Jong Un, he might desperately nuke who-knows-where or order an artillery strike on Seoul. Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea. This means that even if there is a palace coup rather than a popular uprising, it's plausible that an irrational general rises to power and starts an irrational nuclear war with the intent to win. So I think it's plausible that prevention is an entirely different beast than policy regarding already existing stable, authoritarian, and armed states.

Facilitating relocation

Economic growth, Effective altruism

People are over-averse to moving, even when moving leads to much better opportunities (e.g., when a volcano destroyed a fraction of nearby houses, the inhabitants who were forced to move ended up better off in earnings and education, conditional on being young; see this paper). Research and incentivization can help reduce this over-aversion. 

It is plausible that even EAs underconsider relocation. If so, a lot of value may be achieved by convincing and facilitating EAs' relocation to high-impact career opportunities.

5
Jackson Wagner
Personally I believe that we should go even further, and look into using assurance contracts to help create "affinity cities" and zoom-towns based on common interests -- we should create new EA hubs in well-chosen parts of the USA, then when people move there we can experiment with various kinds of community support (childcare, etc.) and exciting new forms of community governance/decisionmaking (maybe all the EAs who use a coworking space pay a fee that gets spent on community-improvement projects as decided by a quadratic-funding process).

Besides the direct effect of creating new, well-functioning EA community hubs in a variety of useful locations, I think that supporting "affinity cities" in general (making them easier for other groups to start, providing a best-practices template of what they can be, etc.) would have powerful effects for creating "governance competition" (cities and towns trying to improve and reform themselves in order to sell themselves as a zoom-town destination) and encouraging more cultural/legal/institutional experimentation, which has positive externalities for the whole society (since everyone benefits from adopting the fruits of the most successful experiments).

I have numerous additional thoughts on this subject, which unfortunately this comment is too small to contain. Hopefully it'll become a Forum post soon. In the meantime, just facilitating individual moves like you're saying would probably be helpful, although it would be strange to have an independent group working solely on this. Better perhaps to build a culture where large EA organizations are especially willing to help their employees with moving. (IMO they are already trying to do this to some extent; for instance, many EA orgs try to have the ability to easily hire internationally.) This would be similar to how many EA orgs make a special effort to compensate people for time spent applying for EA jobs -- getting paid for time spent on a job application is much more common i
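The quadratic-funding process mentioned above has a simple core rule: a project's target total is the square of the sum of the square roots of its individual contributions, so broad support attracts more matching than concentrated support. A minimal sketch (project names and amounts are made up; real mechanisms add pairwise caps and other defenses against collusion):

```python
import math

def quadratic_funding(projects, matching_pool):
    """Allocate a matching pool with the quadratic funding rule:
    a project's ideal total is (sum of sqrt(contribution))^2.

    projects: {name: [individual contribution amounts]}
    Returns {name: matched subsidy}, scaled down if the pool is too small.
    """
    ideal_match = {}
    for name, contribs in projects.items():
        raw_total = sum(contribs)
        qf_total = sum(math.sqrt(c) for c in contribs) ** 2
        ideal_match[name] = qf_total - raw_total  # subsidy on top of raw donations
    total_ideal = sum(ideal_match.values()) or 1.0
    scale = min(matching_pool / total_ideal, 1.0)  # never pay more than the ideal match
    return {name: round(m * scale, 2) for name, m in ideal_match.items()}

# Many small donors beat one large donor for the same raw total:
projects = {
    "childcare_coop": [10] * 10,  # 10 donors x $10 = $100
    "mural": [100],               # 1 donor  x $100 = $100
}
print(quadratic_funding(projects, matching_pool=450))
```

With these made-up numbers, the ten-donor project captures the whole pool even though both projects raised the same $100, which is exactly the "breadth of support" property the mechanism is designed for.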
1
Peter S. Park
Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue.  This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.

EA/AI Hiring Round

Effective Altruism/ AI Safety

Meet with a variety of organisations and design a short set of questions to best predict good candidates for roles. Allow anyone to take this test every 3 months and apply for a broad range of positions, e.g. all EA ops roles in their city or all AI safety roles. Hire more, higher-quality candidates.

4
Eevee🔹
Candidates could also be matched with orgs using an algorithm like the one used by the National Resident Matching Program.
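The NRMP match is built on a deferred-acceptance (Gale-Shapley) algorithm. A minimal applicant-proposing sketch of that idea, with made-up applicant and org names (a real hiring round would add couples, ties, and other extensions):

```python
from collections import deque

def deferred_acceptance(applicant_prefs, org_prefs, capacities):
    """Applicant-proposing deferred acceptance (Gale-Shapley).

    applicant_prefs: {applicant: [orgs in preference order]}
    org_prefs: {org: [applicants in preference order]}
    capacities: {org: number of open roles}
    Returns a stable matching {org: [applicants]}.
    """
    rank = {o: {a: i for i, a in enumerate(p)} for o, p in org_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next org each applicant will try
    matched = {o: [] for o in org_prefs}
    free = deque(applicant_prefs)                  # applicants still seeking a spot

    while free:
        a = free.popleft()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # list exhausted; stays unmatched
        o = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if a not in rank[o]:
            free.append(a)                         # org did not rank this applicant
            continue
        matched[o].append(a)
        matched[o].sort(key=lambda x: rank[o][x])  # org tentatively keeps its favorites
        if len(matched[o]) > capacities[o]:
            free.append(matched[o].pop())          # least-preferred applicant is bumped

    return matched

# Toy example (hypothetical names):
applicants = {"ana": ["ops_org", "ai_lab"], "ben": ["ai_lab", "ops_org"], "cy": ["ai_lab"]}
orgs = {"ops_org": ["ben", "ana"], "ai_lab": ["ana", "cy", "ben"]}
print(deferred_acceptance(applicants, orgs, {"ops_org": 1, "ai_lab": 1}))
```

The resulting matching is stable: no applicant and org both prefer each other over whom they got, which is one reason this family of algorithms is trusted for high-stakes centralized hiring.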

Funding private versions of Longtermist Political Institutions to lay groundwork for government versions

Some of the seemingly most promising and tractable ways to reduce short-termist incentives for legislators are Posterity Impact Assessments (PIAs) and Futures Assemblies (see Tyler John's work). But it isn't clear just how PIAs would actually work, e.g. what would qualify as an appropriate triggering mechanism, what evaluative approaches would be employed to judge policies, or how far into the future policies can be evaluated. It seems it would be relatively inexpensive to fund an organization to do PIAs in order to build a framework which a potential in-government research institute could adopt instead of having to start from scratch. The precedent set by this organization would also contribute to reducing the difficulty of advocating for longtermist agencies/research institutes within government. 

Similarly, it would be reasonably affordable to run a trial Futures Assembly wherein a representative sample of a country's population is formed to deliberate over how and to what extent policy makers should consider the interests of future persons/generations. This would provide a precedent for potential government funded versions as well as a democratically legitimate advocate for longtermist policy decisions. 

Basically, EAs could lay the groundwork for some of the most promising/feasible longtermist political institutions without first needing to get legislation passed. 

Movement-building targeted at existential-risk-relevant fields' international scientific communities

Biorisk, Artificial Intelligence

Scientists of the Manhattan Project built the first nuclear bombs, the development and use of which normalized nuclear proliferation. Contrast this with bioweapons, which in principle could also have been normalized if not for the advocacy of scientists like Matthew Meselson, which led to a lasting international agreement to not develop bioweapons (Biological Weapons Convention).

Targeted efforts to build the movement for reducing catastrophic/existential risks (and longtermism in general) within the international scientific communities of fields that are highly relevant to certain existential risks, and whose lasting cooperation would be crucial for the non-realization of these risks, could potentially be very impactful. Some potential approaches include funding fellowships/grants/collaboration opportunities, creating scientific societies/conferences, and organizing advocacy/outreach/petitions.

Towards Better Epistemology in Medicine

Epistemic Institutions, Values and Reflective Processes

Medicine is a field subject to an incentive landscape that can, among other issues, encourage pathological risk aversion in treatment and research, which holds back patients from getting the care with the greatest expected value to them and limits our ability as a society to adapt to new and changing health issues such as global pandemics. Medical professionals are often trained in a narrow set of epistemic norms that lead to slow updates on new evidence, overreliance on individual decision-making, and difficulty communicating about complex tradeoffs. The unavoidable closeness to moral and ethical issues, as well as difficulties in reasoning about decisions that hold lives directly in the balance, exacerbates the problem.

We're interested in projects that address these problems, perhaps including the following:
- Literature and media that promote truth-seeking and expected-value-thinking norms in medicine, whether explicit in non-fiction or training material, or in fictional settings
- Resources that seek to aggregate medical evidence relevant to a specific condition or clinical appli…

[This comment is no longer endorsed by its author]
Jackson Wagner
I think this is quite important insofar as:
- It could help change the existing academic culture of overly-restrictive "bioethics" around public health issues like pandemics to think more rationally about when to approve things like rapid tests and vaccines, when to impose mandates and travel bans versus not, etc.
- It might lead to broader reforms and readjustments of focus, leading to a faster pace of developing medicines (ultimately saving many QALYs), reductions in healthcare cost, more progress in understanding aging, etc.

One reason not to focus on this intervention is if you thought that general epistemology-improving efforts across academia would work well, and there's no particular reason to target medicine/bioethics/etc. first.

AI safety university groups

Artificial Intelligence

Leading computer science universities appear to be a promising place to increase interest in working to address existential risk from AI, especially among the undergraduate and graduate student body. In Q1 2022, EA student groups at Oxford, MIT, Georgia Tech, and other universities have had strong success with AI safety community-building through activities such as facilitating the semester-long AGI Safety Fundamentals program locally, hosting high-profile AI safety researchers for virtual guest speaker events, and running a research paper reading group. We'd also like to see student groups which engage students with opportunities to develop relevant skills and which connect them with mentors to work on AI safety projects, with the goal of empowering students to work full-time on AI safety. We'd be happy to fund students to run AI safety community-building activities alongside their studies or to take a gap semester, or to sponsor other people to support an EA group at a leading university in building up the AI safety community.

 

Some additional comments on why I think AI safety clubs are promising:

  • For those unfamiliar, the AGI Sa…
MaxRa
Thanks for sharing this idea, super exciting to me that there is so much traction for getting junior CS people excited about AI Safety. I'd love to see much more of this happen and will likely (70%?) try to spend > a day thinking about this in the next month. If you have more ideas or pointers to look into, would highly appreciate it.

EA Crisis Fund:

Effective Altruism/X-risk

The EA Crisis Fund would respond to crises around the world, such as the current crisis of Ukrainian refugees. This would help develop EA's capability to respond to novel situations on short timelines, provide great publicity, and build connections and credibility with governments. This would increase the chance that EA has a seat at the table in important discussions.

Potential Downside: It may be hard to respond to these crises in a way that builds credibility without burning a lot of money.

Jackson Wagner
I think if we are just jumping into the same highly-salient crises as everybody else (Ukraine today, Afghanistan yesterday, Black Lives Matter, Covid, etc), we burn a lot of money quickly at only middling effectiveness (even if we try to identify specific "most effective" interventions in each crisis, like providing oxygen tanks to Indian hospitals during their covid surge) and don't even get a huge amount of publicity because everybody else is also playing that same game (see: Elon Musk giving starlinks to Ukraine, etc). This idea maybe works better if we are trying to respond to other crises elsewhere in the world that everyone else isn't already going bananas over -- like doing famine/disaster relief in countries that aren't getting headlines, or doing pandemic early-response stuff before the world realizes it's a problem, or having some kind of "Pivotal Action Fund" on hair-trigger alert to attempt a response to the potential emergence of transformative AGI capabilities.  I'm not sure what specific approaches such a fund would use to reliably improve response times above the current situation (which is presumably "OpenPhil has the ability to spend a lot of money fast if they all start really freaking out about an emerging crisis"), but I'd certainly be interested to hear someone explore this idea.
IanDavidMoss
I think the experience of the FRAPPE donor circle, which formed in response to the first COVID wave in spring 2020, is relevant. We found that it didn't take that much money or time for us to be able to 1) get access to high-quality, often non-public information about the crisis and how it was unfolding and 2) find strong giving opportunities that not enough other people were paying attention to. I like Chris's idea because the combination of high salience + fast-moving environment is often a good one for finding high-leverage opportunities, but it's easier to intervene effectively and take on a leadership role when you have gone to the trouble of setting up some infrastructure for it in advance.

Mentors/tutors for AI safety:

AI Safety

Many people want to contribute to AI safety, but they may not be able to get up to the level where they would be able to conduct useful research. On the other hand, given time, many of these people could probably become knowledgeable enough about a particular agenda to mentor potential researchers pursuing it. These mentors could help people understand the reasons for and against pursuing a particular agenda, help people navigate the content that has been written on that topic, address common misconceptions, and help people who are confused about a particular point.

Academic AI Safety Journal
Start an Academic Journal for AI Safety Research

Problem: There isn’t one. There should be. It would boost prestige and attract more talent to the field.

Solution: Fund someone to start one.

This has come up a few times before and is controversial.

Pros: 

  • more incentive for academics to work on pure safety without shoehorning their work
  • higher status
  • better peer review / less groupthink

Cons: 

  • risks putting safety into an isolated ghetto. Currently a lot of safety stuff is published in the best conferences
  • Journals matter 100x less than conferences in ML
  • I think academics are a minority in AIS at the moment (weighted by my subjective sense of importance anyway)

 

FWIW I take the first con to be decisive against it. Higher status takes a long time to build, and better peer review is (sadly) a mirage.

Logan Riggs
You can still have a conference for AI safety specifically and present at both conferences, with a caveat. From NeurIPS:

> Can I submit work that is in submission to, has been accepted to, or has been published in a non-archival venue (e.g. arXiv or a workshop without any official proceedings)? Answer: Yes, as long as this does not violate the other venue's policy on dual submissions (if it has one).

The AI safety conference couldn't have an official proceeding. This would still be great for networking and disseminating ideas, which is definitely worth it.
Dawn Drescher
Another option may be a conference (that forms more of a Schelling point in the field than all the existing ones). These seem to be more popular in the wider field. But both solutions also have the risk that fewer people outside AI safety may read the AI safety papers.

A better overview of the effective altruism community

Effective Altruism

The effective altruism movement has grown large enough that it has become hard for any individual to have a good overview of ongoing projects and existing organizations. There is currently no central repository on what is happening across different causes and parts of the movement, which means many opportunities for coordination may be left on the table. We would like to see more initiatives like the yearly EA survey and a more detailed version of Ben Todd’s recent post that research and provide an overview of what is happening across the effective altruism movement.

Increase diversity with more ‘medium term’ plans to enable participation when travel is required

Community Building and Diversity, Values and Reflective Processes

I’m new here and it seems like many opportunities are planned with short notice. This can work well for people with lots of flexibility, but may discourage participation from people who are mid-career/working and people with families. I propose that organizations within EA encourage diversity by lengthening some planning horizons. Funding a stable hub with enough runway to have a 6 month planning horizon would be helpful for professionals and parents like my family.

Enlightenment at scale (provocative title :-) )

Values and Reflective Processes (?), X-risk (?)

A strong meditation practice promises enticing benefits to the meditator: less suffering, more control over one's attention and awareness, more insight, more equanimity. Brahmavihara practice promises the cultivation of loving-kindness, compassion, and empathetic joy. The world would be a much better place if everybody suffered less, had more equanimity, and felt strong compassion and empathy with other beings. But meditation is hard! Becoming a skilled meditator, and reaping these benefits, probably requires thousands of hours of dedicated practice. Most people will just not put in this amount of effort. But maybe it doesn't need to be this way. The field of meditation teaching seems underdeveloped, and innovative methods that make use of technology (e.g. neurofeedback) seem largely unexplored. We are interested in supporting scalable solutions that bring the benefits of meditation to many people.

Note:

  • I don't actually know if meditation really has these benefits; this would need to be established first (there should be quite some research on this by now). It seems plausible to me that m…
Dawn Drescher
Are there high-quality safety trials for different meditation practices? I’ve heard of a variety of really bad side effects, usually from very intense, very goal-oriented meditative practice. The Dark Night that Daniel Ingram describes, the profligacy that Scott Alexander warned of, more inefficient perception that Holly Elmore experienced, etc. I have no idea how common those are and whether one is generally safe against them if one only meditates very casually… It would be good to have more certainty about that, especially since a lot of my friends are casual meditators.

Optimal 90-second pitches for EA/longtermism

Effective altruism

Longtermism is nuanced; a full discussion requires a large amount of time. More people than currently might be interested in learning about the movement if they are presented with a short but compelling pitch suited to the pace of many people's lifestyles. (I've given a spontaneous and very suboptimal pitch for EA on at least one occasion, which I regret.)

Optimized 90-second or so pitches may potentially help the movement's outreach. Persuasive pitches (each focused on each of a myriad of topics/angles that the listener may be interested in) can be selected by community contests/focus groups and posted online, both for viewing and for informing movement builders' efforts. 

Peter S. Park
Addendum: Found out that this is just a special case of James Ozden's 'Refining EA communications and messaging'

Monitoring and advocacy to make Zoonotic Risk Prediction projects safer

Biorisk and recovery from catastrophe

Following COVID-19, a great deal of funding is becoming available for "Zoonotic Risk Prediction" projects, which intend to broadly sample wildlife pathogens, map their evolutionary space for pandemic potential, and publish rank-ordered lists of the riskiest pathogens. Such work is of dubious biodefence value, presents a direct risk of accidental release in the field and lab, and the resulting information is a clear biosecurity infohazard.

We would be excited to fund projects to collect, monitor, and report on the activities of these projects. ZRP projects have multiple components (field sampling, computational modelling, and lab characterization), each of which carries distinct risks and leaves an information trail. Monitoring and reporting on open-source information associated with ZRP projects could disincentivize the riskiest aspects of this work, target resources for event surveillance and early warning of accidental release, and provide material for advocacy efforts.

There is some overlap with portions of the BWC project, but I think this is best tackled as a separate body of work/by a different team (due to radically different OPSEC, deception, and scrutiny profiles). I've thought about this a fair bit and am happy to discuss offline.

Better Reporting on Other Countries' Perspectives

Epistemic Institutions

(Refinement of better news)

It's very hard for a regular person to understand what the Russian or Chinese or Turkish perspectives on events are from reading Western media. It would be valuable to have a high-quality mainstream news media source that takes special effort to make sure that this is explored, including by having on-staff anthropologists. This would increase understanding between countries and reduce the chance of Great Power Conflict.

Dawn Drescher
I wonder whether Larissa MacFarquhar (author of Strangers Drowning) may be someone to talk to about this. She managed to understand Julia Wise so well that I learned new things about myself from reading her chapter in the book. The only other people who can do that are close friends of mine who’ve known me for years. Maybe Larissa is just similar to Julia and so had this level of insight, but maybe she’s also exceptionally gifted at perspective-taking.

Agent Foundations and Philosophy Engagement Fund:

AI Safety

Agent Foundations research may potentially be important for AI safety, but it has so far received very little engagement from the philosophical community. This fund would offer funding and/or scholarships for people who want to engage with these ideas in an academic philosophical context. This project aims to improve clarity about whether this research is actually worthwhile and, if so, to help make progress on these problems.

EA Founders Camp

Effective altruism, empowering exceptional people

The EA community is scaling up, and funding ambitious new projects. To support continued growth of new organisations and projects, we would be excited to fund an organisation to run EA Founders Camps. These events would provide an exciting, sparky environment for (1) Potential founders to meet co-founders, (2) Founders to hear about and generate great ideas for impactful projects and organisations, (3) Founders to get key training tailored to their project area, (4) Founders to build a support network of other new and existing founders, (5) Founders to connect with funders and advisers.

Regulating AI consciousness

Artificial intelligence,  Values and reflective process

The probability that AIs will be capable of conscious processing in the coming decades is not negligible. With the right information dynamics, some artificial cognitive architectures could support conscious experiences. The global neural workspace is an example of a leading theory of consciousness compatible with this view. Furthermore, if it turns out that conscious processing improves learning efficiency, then building AI capable of consciousness might become an effective path toward more generally capable AI. Building conscious AIs would have crucial ethical implications given their high expected population. To decrease the chance of bad moral outcomes we could follow two broad strategies. First, we could fund policy projects aiming to work with regulators to ban or slow down research that poses a substantial risk of building conscious AI. Regulations slowing the arrival of conscious AIs could be in place until we gain more moral clarity and a solid understanding of machine consciousness. For example, philosopher Thomas Metzinger has advocated a moratorium on synthetic phenomenology in …

[anonymous]

Vetting and matchmaking organization of consultants and contractors for EA founders
Empowering Exceptional People, Effective Altruism


Founders of new projects, charities, and other EA-aligned organisations can have an extremely high impact. These individuals tend to suffer more from issues such as overwhelm and burnout, which can easily lead them to have much less impact both short- and long-term. A potential intervention against this is decreasing the decision-making overload by helping them outsource some of their decision-making.


We'd love to see an organization that offers vetting and matchmaking for independent consultants and contractors in several relevant areas of decision-making for these people so they can tap into knowledge and expertise faster with less effort and cognitive load.
This service can be considered an expansion of this idea by aviv.

Open-source intelligence agency

Great Power Relations

Create an organization that will collect and analyze open-source intelligence on critical topics (e.g. the US nuclear arsenal; more examples below) and publish it online.

Many documents on the US nuclear arsenal and military activities were obtained through the Freedom of Information Act. Still, they were never analyzed properly, because that is a lot of tedious work that journalists do not have the capacity or incentive to do. Standard open-source intelligence-gathering methods can provide even more information. As a result, there is only a limited public understanding of important sources of x-risk.

Possible subjects of investigation:

  1. The state of the nuclear arsenals of the US and Russia.
  2. Military development of artificial intelligence.
  3. Propaganda and hacking capabilities of Russia and China.
  4. The state of AI arms races, both between states and between companies.
  5. Monitoring the activities of the secret services of both Russia and the USA (for example, to better estimate the capabilities of the GRU, NSA, and others).
  6. (bioweapons has its own comment)

Scaling successful policies

Biorisk and Recovery from Catastrophe, Economic Growth

Information flow across institutions (including national governments) is far from optimal, and there could be large gains in simply scaling what already works in some places. We’d love to see an organization that takes a prioritized approach to researching which policies are currently in place to address major global issues, identifying which of these are most promising to bring to other institutions and geographies, and then bringing these to the institutions and geographies where they are most needed.

Reduce meat consumption

Biorisk, Moral circle expansion

Research and efforts to reduce broad meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Perhaps messaging from the pandemic-prevention angle (in addition to the climate change angle and the moral circle expansion angle) may help. 

Platform Democracy Institutions

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

Facebook/Meta, YouTube/Google, and other platforms make incredibly impactful decisions about the communications of billions. Better choices can significantly impact geopolitics, pandemic response, the incentives on politicians and journalists, etc. Right now, those decisions are primarily in the hands of corporate CEOs, and heavily influenced by pressure from partisan and authoritarian governments aiming to entrench their own power. There is an alternative: platform democracy. In the past decade, a new suite of democratic processes has been shown to be surprisingly effective at navigating challenging and controversial issues, from nuclear power policy in South Korea to abortion in Ireland.

Such processes have been tested around the world, overcome the pitfalls of elections and referendums, and can work at platform scale. They enable the creation of independent ‘people’s mandates’ for platform policies—something invaluable for the impacted populations, well-meaning governments which are unable to act on speech, and …

Polis lobbying

Political

Pol.is is a tool for mapping coalitions (mentioned in this 80,000 Hours podcast). Rather than running standard polls on issues, large Pol.is polls could be run, as Taiwan does. These would seek to build solutions which hold broad support before taking them to lobbyists.
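To make the mechanism concrete, here is a toy sketch (in Python, with made-up statements, groups, and threshold) of the kind of consensus computation Pol.is-style tools perform: surfacing statements that a large majority in *every* opinion group agrees with, rather than statements that merely win a simple majority. Real Pol.is derives the groups by clustering the participant-by-statement vote matrix; everything below is purely illustrative.

```python
# Toy sketch of a Pol.is-style "consensus" computation: find statements
# that a large majority in *every* opinion group agrees with. Groups and
# votes are hypothetical; real Pol.is derives groups by clustering the
# participant-by-statement vote matrix.

votes = {
    # statement: {group_name: [1 = agree, 0 = disagree, ...]}
    "fund more parks":    {"A": [1, 1, 1], "B": [1, 1, 0]},
    "raise parking fees": {"A": [1, 1, 1], "B": [0, 0, 0]},
}

def consensus_statements(votes, threshold=0.6):
    """Return statements whose agreement rate meets the threshold in every group."""
    return [
        statement
        for statement, groups in votes.items()
        if all(sum(g) / len(g) >= threshold for g in groups.values())
    ]

assert consensus_statements(votes) == ["fund more parks"]
```

A lobbying effort could then focus on the statements that clear this bar, since they already hold broad support across coalitions.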

Backup communication systems

Biorisk and Recovery from Catastrophe

In the event of GCRs, conflicts, or disasters, communication systems are key to sensemaking and coordinating effectively. They prevent chaos and further escalation of conflicts. Today, there are many threats to the global communication infrastructure, including EMPs, widespread cyber attacks, and solar flares.

Metaculus Competitor

Forecasting

Prediction markets don't incentivise long-term questions, and the Good Judgement Open has slow question creation. This leaves Metaculus as the only place to forecast questions over long time horizons. This is too important a problem to have a single organisation solving it. At least one more forecasting organisation should exist to try to build the infrastructure necessary to accept forecasts, improve individual forecasting, display track records, and make 5- to 1000-year forecasts.
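As a sketch of the "display track records" piece of that infrastructure: once questions resolve, a platform needs a scoring rule to rank forecasters. A minimal example using the Brier score (all numbers illustrative):

```python
# Minimal sketch of track-record scoring for a forecasting platform.
# The Brier score for a binary question is (p - outcome)^2:
# 0.0 is a perfect forecast, and 0.25 is what always answering 50% earns.

def brier(prob: float, outcome: bool) -> float:
    return (prob - (1.0 if outcome else 0.0)) ** 2

def track_record(forecasts):
    """forecasts: list of (probability, resolved_outcome) pairs."""
    scores = [brier(p, o) for p, o in forecasts]
    return sum(scores) / len(scores)

# A forecaster who leaned the right way on three resolved questions:
record = track_record([(0.9, True), (0.2, False), (0.7, True)])
assert record < 0.25  # better than always answering 50%
```

For 5- to 1000-year questions, the hard infrastructure problem is that most scores resolve far in the future, so a real platform would also need interim proxies (e.g. scoring against later community forecasts).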

Securing offices and schools against SARS-3

Biorisk

The COVID-19 pandemic has demonstrated failures of our scientific, political, and epistemic institutions, but also of our physical structures. We believe that accurate and high-quality designs that secure offices and schools against the spread of airborne pathogens can a) be directly useful, b) potentially generalize well to future pandemics, and c) provide the necessary training ground for building more robust and ambitious projects in the future, including large-scale civilizational refuges.

We picked offices and schools to limit the threat model and surface area, but we're in theory excited about designs that can contain pathogen spread in any well-trafficked built environment.

Jackson Wagner
Is there a way to get more leverage on this? Maybe:
- Research new sterilization tech (like shining UV-C light horizontally across the ceiling in a way that cleans the air but doesn't harm people) so that buildings can be retrofitted more easily, without redoing the whole HVAC system? This would count under FTX's project idea #8.
- Lobbying for better air-filtration systems to be made a requirement for schools and offices as a matter of government budgets (for schools) and regulation (for offices)? I'm sure we could swing a state or local ballot proposition in a covid-cautious and wildfire-plagued place like California.
Linch
I think we're bottlenecked more on really good designs than on the politics, but I'm not sure. I also vaguely have this cached view that a lot of whether built-environment innovations are used in practice depends on things that look more like building codes than office politics, but this is a pretty ill-formed view that I have low confidence in. I guess that I sort of believe all three should be done in a sane world, and which things we ought to prioritize in practice will depend on a combination of "POV of the universe" modeling and personal fit considerations of whoever wants to implement any of these considerations.

Consulting on best practices around info hazards

Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve

Information about ways to influence the long-term future can in some cases give rise to information hazards, where true information can cause harm. Typical examples concern research into existential risks, such as around potential powerful weapons or algorithms prone to misuse. Other risks exist, however, and may also be especially important for longtermists. For example, better understanding of ways social structures and values can get locked in may help powerful actors achieve deeply misguided objectives. 

We would like to support an organization that can develop a set of best practices and consult with important institutions, companies, and longtermist organizations on how best to manage information hazards. We would like to see work to help organizations think about the tradeoffs in sharing information. How common are info hazards? Are there ways to eliminate or minimize downsides? Is it typically the case that the downsides to information sharing are much smaller than upsides or vice versa?

Comprehensive, personalized, open source simulation engine for public policy reforms

Epistemic Institutions, Economic Growth, Values and Reflective Processes

Policy researchers apply quantitative modeling to estimate the impacts of immigration reform on GDP, child benefits on fertility, safety net reform on poverty, carbon pricing on emissions, and other policies. But these analyses are typically narrow, impersonal, inflexible, and closed-source, and the public can rarely access the models that produce them.

We'd like to see a general simulation engine—built with open source code and freely available to researchers and the public—to estimate the impact of a wide variety of public policy reforms on a wide variety of outcomes, using a wide variety of customizable parameters and assumptions. Such a simulation engine could power analyses like those above, while opening up policy analysis to more intricate reforms, presented as a technology product that estimates impacts on society and one's own household.

A common technology layer for public policy analysis would promote empiricism across institutions from government to think tanks to the media. Exposing households to society-wide and pers…
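As a hedged illustration of what the core loop of such an engine might look like: apply a parameterized reform to household microdata, then compare an outcome metric under baseline and reform. All households, parameters, and the poverty threshold below are invented for the sketch.

```python
# Illustrative core loop for a policy simulation engine: apply a
# parameterized reform to household microdata, then compare an outcome
# metric under baseline vs. reform. All figures here are invented.

POVERTY_LINE = 15_000  # hypothetical household poverty threshold

households = [
    {"income": 12_000, "children": 2},
    {"income": 40_000, "children": 1},
    {"income": 9_000, "children": 3},
]

def child_benefit_reform(household, amount_per_child):
    """One example 'reform': a cash transfer per child."""
    reformed = dict(household)
    reformed["income"] += amount_per_child * household["children"]
    return reformed

def poverty_rate(hhs):
    return sum(h["income"] < POVERTY_LINE for h in hhs) / len(hhs)

baseline = poverty_rate(households)
reformed = poverty_rate(
    [child_benefit_reform(h, amount_per_child=3_000) for h in households]
)
assert reformed <= baseline  # cash transfers weakly reduce measured poverty
```

A real engine would expose many such reform functions and outcome metrics, all driven by user-supplied parameters and assumptions, and could evaluate the same reform for society as a whole or for one user's own household.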

Create an organization doing literature reviews and research on demand

Values and Reflective Processes,  Effective Altruism

Create a research organization that will offer literature reviews and research to other EA organizations. It would focus on questions that are not theory-heavy and can be approached by generalists without previous deep knowledge of the field. Previous examples of such research are the publications of AI Impacts or literature reviews by Luke Muehlhauser.
Besides the research itself, this is also useful because:
- It frees up the time of senior researchers.
- It can be a good training place for junior researchers.
- It may enable a larger infusion of valuable ideas from academia.

PeterSlattery
I think that this is a good idea and something that READI could be interested in supporting. We have extensive experience doing reviews as research consultants and in providing related training (both as volunteers and professionals). One related idea that some of us are exploring is developing a sort of 'micro-course and credential' to i) train EAs to do reviews and ii) curate teams of credentialed junior researchers who can support the undertaking of literature reviews under the supervision of an expert.
Jakob
Would this be another organization like Rethink Priorities, or is it different from what they are doing? (Note: I don't think this space is crowded yet, so even if it is another organization doing the same things, it could still be very helpful!)

Creating more EA relevant credentials
Movement building

EA wants to equip young people with the knowledge and motivation to improve the long-term future by providing high-quality online educational resources for anyone in the world to learn about effective altruism and longtermism. Most young people follow established education paths (e.g., school, university, and professional courses) and seek related credentials during this time. There are relatively few credentialed courses or activities which provide exposure to EA ideals and core capabilities. We would therefore like to fund more of these. For instance, these might include talent-based scholarships (e.g., a ‘rising social impact star award’), cause-related Olympiads (e.g., AI safety), MOOCs/university courses (e.g., on causes or key skill sets, with an EA tie-in), and EA-themed essay writing competitions (e.g., asking high school students to write about 'the most effective ways to improve the world’ and giving awards to the best ones).

New EA incubation and scaling funds and organisations
Movement building, coordination, coincidence of wants problems, & scaling

Charity Entrepreneurship, Y Combinator, Rocket Internet, and similar organisations have had notable and disproportionate economic and social impacts and accelerated the dissemination of innovative ideas. The EA community has also called for more founders. We would therefore like to support EA and social impact funds that initiate or scale relevant initiatives such as charities (e.g., tax-deductible EA charity funds, long-term future fund equivalents, or research institutes).

[anonymous]

Job application support for underrepresented groups
Increasing diversity in EA, Effective Altruism


Underrepresented groups usually face additional (or exacerbated) challenges in job applications: language barriers, impostor syndrome, smaller networks, etc., which affect their application success. There are organisations within the EA ecosystem that provide career coaching, but none provides dedicated, on-demand support with job applications.
 

We'd love to see an organisation that provides ongoing support to people from underrepresented groups in job applications including: finding the right opportunities, preparing application documents, preparing for interviews, etc. so they are more likely to land high impact roles.

Economic growth

Work with developing countries to buy an area of land to form an EA special economic area. This could be a place where EAs congregate and innovate in IT and other fields. It could also be a place where EA can demonstrate new policies and technologies and pioneer new ways of thinking.

EA could expand on this idea by building communities in remote places that are likely to survive extinction events. These would also provide a good opportunity to test technology that could be used in space colonies.

Generous prizes to attract young top talent to EA in big countries

Effective altruism

Prizes are a straightforward way to attract top talent to engage with EA ideas. They also require relatively little human capital or expertise and are therefore conceivably scalable across countries. Through a nationwide selection process optimized for raw talent, ability to get things done, and altruistic alignment, an EA prize could quickly make the movement well-known and prestigious in big countries. High school graduates and early university students would probably be the best target audience. The prize could come with a few strings attached, such as participating in a two-week-long EA fellowship, or with more intense commitments, such as working for a year on an EA-aligned project. Brazil and India are probably the best fit, considering their openness to Western ideas and philanthropic investment (in comparison to China and Russia). Other candidates may include the Philippines, where EA groups have been relatively successful, Indonesia, Argentina, Nigeria, and Mexico.

Fund/Create training for mental health workers

Effective Altruism

A limiting factor in Canadian health care right now is that there aren't enough psychologists, psychiatrists, and other mental health workers. People who can't access these services end up in the emergency department and strain the health system in other ways. Mental health is fundamental to participation in societal roles; even highly conscientious people are at risk, and children are waiting years for assessments (e.g., for ADHD) that can change the course of their lives.

Psychiatry is one of the least well-paid medical specialties, and it takes many years to train psychologists and psychiatrists.

I propose funding the training of the mental health workforce, as well as lobbying to have mental health services included as essential health care services.

Website for coordinating independent donors and applicants for funding

Empowering exceptional people, effective altruism

At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website that lets applicants post requests for funding and lets potential donors browse those requests and offer full or partial funding seems like an effective solution.

Nuclear arms reduction to lower AI risk

Artificial Intelligence and Great Power Relations

In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think understanding the dynamics of and policy responses to this topic are under-researched and would benefit from further investigation.

aogara
Strongly agree with this. There are only a handful of weapons that threaten catastrophe to Earth’s population of 8 billion. When we think about how AI could cause an existential catastrophe, our first impulse shouldn’t be to think of “new weapons we can’t even imagine yet”. We should secure ourselves against the known credible existential threats first. Wrote up some thoughts about doing this as a career path here: https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7
Greg_Colbourn
On the flip side, you could make part of your 'pivotal act' be the neutralisation of all nuclear weapons.

Pilot emergency geoengineering solutions for catastrophic climate change

Research That Can Help Us Improve

Toby Ord puts the risk of runaway climate change causing the extinction of humanity by 2100 at 1/1000, a staggering expected loss. Emergency solutions, such as seeding oceans with carbon-absorbing algae or creating more reflective clouds, may be our last chance to prevent catastrophic warming but are extraordinarily operationally complex and may have unforeseen negative side-effects. Governments are highly unlikely to invest in massive geoengineering solutions until the last minute, at which point they may be rushed in execution and cause significant collateral damage. We’d like to fund people who can:

  • Identify and pilot at large scale the top geoengineering initiatives over the next 5-10 years to develop operational lessons, e.g., promoting algae growth in a large private lake, or launching a small cluster of mirrors into space
  • Develop advanced supercomputer models, potentially with input from the above pilots, of the potential negative side-effects of geoengineering solutions
  • Identify and pilot harm-mitigation responses for geoengineering solutions

Epistemic status: there seems to be rea... (read more)

Kirsten
I thought China has already done some low-key geoengineering? https://80000hours.org/podcast/episodes/kelly-wanser-climate-interventions/
Rory Fenton
Thanks for sharing! My initial sense is that China's method is focused on controlling rainfall, which might mitigate some of the effects of climate change (e.g., reduce drought in some areas, reduce hurricane strength) but not actually prevent it. The ideas I had in mind were more emergency approaches to actually stopping climate change, either by rapidly removing carbon (e.g., algae in oceans) or by reducing the solar radiation absorbed at the Earth's surface (making clouds/oceans more reflective, space mirrors).

Incremental Institutional Review Board Reform

Epistemic Institutions, Values and Reflective Process

Institutional Review Boards (IRBs) regulate biomedical and social science research. In addition to slowing and deterring life-saving biomedical research, IRBs interfere with controversial but useful social science research: e.g., Scott Atran was deterred from studying jihadi terrorists; Mark Kleiman was deterred from studying the California prison system; and a Florida State University IRB cited public controversy as a reason to deter research. We would like to see a group focused on advocating for plausible reforms to IRBs that would allow more social science research to be performed. Some plausible examples:

  1. Prof. Omri Ben-Shahar’s proposal to replace exempt IRB reviews with an electronic checklist or
  2. Zachary Schrag’s proposal (from Ethical Imperialism) that Congress remove social science research from OHRP jurisdiction by amending the National Research Act of 1974.

Concrete steps to these goals could be: 

  1. sponsoring a prize for the first university that allowed use of Prof. Omri Ben-Shahar’s electronic checklist tool;
  2.  setting up a journal for “Deterred Social Science Resea
... (read more)

Longtermism movement-building/election/appointment efforts, targeted at federal and state governments

Effective altruism

Increasing knowledge of and alignment with longtermism in government by targeted movement-building and facilitating the election/appointment of sympathetic people (and of close friends and family of sympathetic people) could potentially be very impactful. If longtermism/EA becomes a social norm in, say, Congress or the Washington 'blob', we could benefit from the stickiness of this social norm.

Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)

Economic Growth, Effective Altruism

Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days when you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance and short-term productivity gains? What are the long-term health effects? Does it affect longevity?


Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.

My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medic... (read more)

Sub-extinction event drills, games, exercises

Civilizational resilience to catastrophes

Someone should build up expertise and produce educational materials / run workshops on questions like 

  1. Nuclear attacks on several cities in a 1000 mile radius of you, including one within 100 miles. What is your first move? 
  2. Reports of a bioweapon in the water supply of your city. What do you do? 
  3. You're a survivor of an industrial-revolution-erasing event. What chunks of knowledge from science can be useful to you? After survival, what are the steps to rebuilding? 
  4. 6 billion people died and the remaining billion are uniformly distributed throughout the planet's former population centers. How can you build up robustness of basic survival, food and water production, shelter, etc.?
  5. (for the IT folks) 5 years after number 4, basic needs are largely met, and scavengers have filled a garage with old laptops and computer parts. Can you begin rebuilding the internet to connect with other clusters around the world? 

Differentially distributing these materials/workshops to people who live in geographical areas likely to survive at all could help rebuilding efforts in worlds where massive sub-extinction events occur. 

Centralising Information on EA/AI Safety

Effective Altruism, AI Safety

There are many lists of opportunities available in EA/AI safety and many lists of the organisations that exist. Unfortunately, these lists tend to get outdated. It would be extremely valuable to have a single list that is kept up to date and is filterable according to various criteria. This would require paying someone to maintain it part-time.

Another opportunity for centralisation would be to create an EA link shortener with pretty URLs. So for example, you'd be able to type in ea.guide/careers to see information on careers or ea.guide/forum to jump to the forum.

Notes: I own the URL ea.guide so I'd be able to donate it.
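The shortener itself is a small piece of infrastructure. A minimal sketch in Python's standard library (the short paths and destination URLs here are illustrative, not decided; a real deployment would also want logging and a way to edit the mapping):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative mapping of pretty short paths to destination URLs.
REDIRECTS = {
    "/careers": "https://80000hours.org/",
    "/forum": "https://forum.effectivealtruism.org/",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(301)  # permanent redirect to the long URL
            self.send_header("Location", target)
        else:
            self.send_response(404)  # unknown short link
        self.end_headers()

# To serve: HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

In practice a static redirect config on an existing host would do the same job with less maintenance; the main ongoing cost is curating the mapping.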

Physical AI Safety 

Drawing on work done in the former Soviet Union to improve safety in bioweapons and nuclear facilities (e.g., free consultations and installation of engineering safety measures, at-cost upgrades of infrastructure such as ventilation and storage facilities), we would like to see a standard set of physical/infrastructure technologies developed to help monitor AI development labs/hardware and provide physical failsafes in the event of unexpectedly rapid takeoff (e.g., a FOOM scenario). Although such scenarios are unlikely, standard guidelines adapting current best practices for data center safety (e.g., restrictions on devices, physical air gaps between critical systems and the broader world, extensive onsite power monitoring and backup generators) could be critical in preventing anxiety over physical and digital security from encouraging risk-taking behaviors by AI development programs (such as rushing builds, hiding locations, or inappropriate dual-use or shared facilities that decrease control over data flows). In particular, low-tech physical hardware such as low-voltage switches has already provided demonstrable benefit in safeguarding high-tech, high-risk activity (See the Goldsb... (read more)

[anonymous]

Acquire and repurpose new AI startups for AI safety

Artificial intelligence

As ML performance has recently improved there is a new wave of startups coming. Some are composed of top talent, carefully engineered infrastructure, a promising product, well-coordinated teams, with existing workflows and management capacity. All of these are bottlenecks for AI safety R&D.

It should be possible to acquire some appropriate startups and middle-sized companies. Examples include HuggingFace, AI21, Cohere, and smaller, newer startups. The idea is to repurpose the mission of some select companies to align them more closely with socially beneficial and safety-oriented R&D. This is sometimes feasible since their missions are often broad, still in flux, and their product could benefit from improving safety and alignment.

Trying this could have very high information value. If it works, it has enormous potential upside, as many new AI startups are being created now that could be acquired in the future. It could potentially more than double the size of AI alignment R&D.

Paying existing employees to do safety R&D seems easier than paying academics. Academics often like to follow the... (read more)

MaxRa
Thanks, I think that's a really interesting and potentially great idea. I'd encourage you to post it as a short stand-alone post, I'd be interested in hearing other people's thoughts.

A center applying epistemic best practices to predicting & evaluating AI progress

Artificial Intelligence and Epistemic Institutions

Forecasting and evaluating AI progress is difficult and important. Current work in this area is distributed across multiple organizations and individual researchers, not all of whom possess (a) technical expertise, (b) knowledge of and skill in applying epistemic best practices, and (c) institutional legitimacy (or who otherwise suffer from cultural constraints). Activities of the center could include providing services to AI groups (e.g., offering superforecasting training or prediction services), producing bottom-line reports on "How capable is AI system X?", hosting adversarial collaborations, pointing out deficiencies in academic AI evaluations, and generally pioneering "analytic tradecraft" for AI progress.

Tradable impact certificates

Effective Altruism, Research That Can Help Us Improve, Economic Growth

Issuing and trading impact certificates could popularize and normalize impact investment and profitable strategic research among the world's economic influencers. Economic growth would then be steered in an approximately good direction; the remaining task would be to further popularize the management and incentivization of impact certificates themselves.

Better understanding the needs of organisational leaders
Coincidence of wants problems

In EA, organisational leaders and potential workers often don't have good information about each other's needs and offerings (see EA needs consultancies). The same is true for researchers who might like to do research for organisations but don't know what to work on. We would like to fund work to help resolve this. This could involve collecting advance market commitments from funders (e.g., org/group x would pay up to x for y hours of design time next year, on average). It could also involve identifying unknowns for key decision-makers in EA in relevant areas (e.g., institutional decision-making, longtermism, or animal welfare), which could be used to develop research agendas and kickstart research.

Organization to push for mandatory liability insurance for dual-use research

Biorisk and Recovery from Catastrophe

Owen Cotton-Barratt for the Global Priorities Project in 2015:

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such ‘dual use research of concern’ poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force researcher institutions to internalise some of the externalities and thereby:

Encourage university departments and priva

... (read more)
Dawn Drescher
The (late) Global Priorities Project produced a long list of policy interventions and found that none of them were feasible at that time and place (UK in 2015), but maybe some of them can be adapted to other times or places where they are feasible. Niel Bowerman’s article “Research note: Good policy ideas that won’t happen (yet)” from 2015 gives an overview.

A Project Candor for Global Catastrophic Risks

Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism

This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War  public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threats from nuclear weapons and the Soviet Union had inaugurated a new era in human history: the Age of Peril. Today, at the precipice, the Age of Peril continues with possible risks from engineered pandemics, thermonuclear exchange, great power war, and more. Voting behavior and public discourse, however, do not seem attuned to these risks. A new privately-funded Project Candor would communicate to the public the nature of the threats, their probabilities, and what we can do about them. This proposal is related to "a fund for movies and documentaries" and "new publications on the most pressing issues," but differs in that it would be a unified and coordinated campaign across multiple media. 

A social media platform with better incentives

Epistemic Institutions, Values and Reflective Processes

Social media has arguably become a major way in which people consume information and develop their values, and the most popular platforms are far from optimally set up to bring people closer to truthfulness or altruistic ends. We’d love to see experiments with social media platforms that provide more pro-social incentives and yet have the potential to reach a large audience.

Eliminate all mosquito-borne viruses by permanently immunizing mosquitoes 

Biorisk and Recovery from Catastrophe

Billions of people are at risk from mosquito-borne viruses, including the threat of new viruses emerging. Over a century of large-scale attempts to eradicate mosquitoes as virus vectors has changed little: there could be significant value in demonstrating large-scale, permanent vector control for both general deployment and rapid response to novel viruses. Recent research has shown that infecting mosquitoes with Wolbachia, a bacterium, out-competes viruses (including dengue, yellow fever and Zika), preventing the virus from replicating within the insect, essentially immunizing it. The bacterium passes to future generations by infecting mosquito eggs, allowing a small release of immunized mosquitoes to gradually and permanently immunize an entire population of mosquitoes. We are interested in proposals for taking this technology to massive scale, with a particular focus on rapid deployment in the case of novel mosquito-borne viruses. 

Epistemic status: Wolbachia impact on dengue fever has been demonstrated in a large RCT and about 10 city-level pilots. Impact on ot... (read more)

Increasing social norms of moral circle expansion/cooperation

Moral circle expansion

International cooperation on existential risks and other impactful issues is largely downstream of social norms of, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and cooperation to include out-group members could potentially be very impactful, especially in relevant countries (e.g., US and China) and among relevant decision-makers.

Movement-building/research/pipeline for content creators/influencers

Effective altruism

Content creators/influencers have (if popular) a lot of outreach potential and earning-to-give potential. We should investigate the possibility of investing in movement-building or a pipeline into this field. Practical research on how to be a successful influencer is also likely to be broadly applicable for movement-building in general.

Jackson Wagner
Rather than a pipeline for turning EAs (of which there are few) into media creators and celebrity influencers, it might be wiser to go the other way and try to specifically target media creators and celebrity influencers for conversion to EA. In my view, the quickest path to something like a high-quality YouTube documentary series about EA probably looks more like "find an existing YouTube studio with some folks who are interested in EA" than "get a group of EAs together and create a media studio". Although the quickest path of all probably involves a mix of both strategies: like 2-3 committed EAs with experience in media getting funding and hiring a bunch of other people already working in media to help them build the project. I've been talking about documentaries/videos because there seem to be a number of EA efforts currently underway to create media studios and the like. But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile.
Peter S. Park
"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption). "But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!

Burying caches of basic machinery needed to rebuild civilisation from scratch

Recovery from Catastrophe

Should the worst happen and a global catastrophe occur, we want to be able to help survivors rebuild civilisation as quickly and efficiently as possible. To this end, burying caches of machinery that can be used to bootstrap development is a useful part of a civilisation recovery toolkit. Such a cache could be in the form of a shipping container filled with heavy machines of open-source design, such as a wind turbine, an engine, a tractor with backhoe, an oven, and basic computers and CNC fabricators. Written instructions would of course also be included, along with a selection of useful books. First we aim to put together a prototype of such a cache and test it in various locations with people of various skill levels, to see how well they fare at "rebuilding" in simulated catastrophe scenarios. Learning from this, we will iterate on the design until at least 10% of simulations are successful (to what is judged to be a reasonable level). We ultimately aim to bury 10,000 such caches at strategic locations around the world. Some will be in well known locations (for the case of sudde... (read more)

Greg_Colbourn
(I've edited the last part re locations after some feedback in this post (worth a read!))  

Targeted social media advertising to give away high-value books

Effective Altruism, Values and Reflective Processes, Epistemic Institutions

Books are a high-fidelity means of spreading ideas. We think that high-value books are those that promote the safeguarding and flourishing of humanity and all sentient life, using evidence and reason. Many of the most valuable books have come out of the Effective Altruism (EA) movement over the last decade. We are keen for more people who want to maximize the good they do to read them. Offering those most likely to be interested in EA ideas free high-value books via targeted adverts on social media could be a highly cost-effective means of growing the EA movement in a values-preserving manner. Examples of target demographics are people interested in charity and volunteering, technology, or veg*anism. Examples of books that could be offered are The Life You Can Save, Doing Good Better, The Precipice, Human Compatible, and The End of Animal Farming. Perhaps a list of books could be offered, with people being allowed to choose any one.

MaxRa
One related idea might be to offer the books at a heavy discount. Historically, I'm much more likely to read a book if it pops up on my Kindle marked down like this: 10€ → 0.99€, compared to books that are given away for free. Maybe book vendors are open to accepting a subsidy to lower the price of EA books?
Greg_Colbourn
This was inspired by Ryan Carey's books in Library idea, and the trend of EA book giveaways to various groups (such as those attending EA Cambridge's AGI Safety Fundamentals course).

DNA banks and backup of Svalbard Global Seed Vault

Biorisk and Recovery from Catastrophe

Arguably, the most important information that the world has generated is the diversity of codes for life. Technologies are available to allow all these to be stored quickly and at low cost in DNA banks. Seed banks currently provide security for the world’s food supply. In the event of a catastrophe, it may be important to have multiple seed banks for redundancy.
 

Redefining humanity & assisting its transition

Artificial intelligence, values and reflective processes

As humanity inevitably evolves into coexistence with AI, the adage "if a man will not work, he shall not eat" needs to be redefined. Beyond AI's early displacement effects, which are already apparent (cue autonomous driving and the trucking industry, etc.), humanity's productivity will continue rising due to the intrinsic nature of AI (consider 3D printing normal/luxury goods at economies of scale), so much so that even plenitude becomes a potential problem. (To the usual rejoinder of 'what about the African kids': note that this is a separate, distribution problem.) Ultimately, we should be contributing towards smoothing the AI transition curve: managing the initial displacement by AI, followed by proactively managing integration.

AI alignment: Evaluate the extent to which large language models have natural abstractions

Artificial Intelligence

The natural abstraction hypothesis is the hypothesis that neural networks will learn abstractions very similar to human concepts because these concepts are a better decomposition of reality than the alternatives. If it were true in practice, it would imply that large NNs (and large LMs in particular, due to being trained on natural language) would learn faithful models of human values, as well as bound the difficulty of translating between the model and human ontologies in ELK, avoiding the hard case of ELK in practice. If it turns out that the natural abstraction hypothesis is true at relevant scales, this would allow us to sidestep a large part of the alignment problem, and if it is false then this allows us to know to avoid a class of approaches that would be doomed to fail. 

We'd like to see work towards gathering evidence on whether the natural abstraction hypothesis holds in practice and how this scales with model size, with a focus on the interpretability of model latents and on experiments in toy environments that test whether human simulators are favored in practice. Work towar... (read more)

Refinement of idea #33, "A fund for movies and documentaries":

I'd like to see filmmakers (including screenwriters and directors) working on EA-inspired films collaborate with social scientists and other subject-matter experts to ensure that their films realistically depict EA issues (such as x-risks) and social dynamics. These collaborations can help filmmakers avoid pitfalls like those of Don't Look Up and The Ministry for the Future.[1]

  1. ^

    From this review: "But while here and there an offhand reference to some reluctant group or other is made, they are, in Ministry, always feckless. The initial disaster undermines India’s Hindu nationalist party, rather than strengthening it. Further disasters are met with turns to socialism. The anti-fossil fuel terrorism that is portrayed (and both criticized and seen as necessary by varying characters) does not provoke anti-environmental terrorism in response. One particular striking example is about two-thirds of the way through the novel, when a small American town is evacuated in the name of half-Earth. While not welcomed, this evacuation is accepted in a way that is all but impossible to imagine, at least while we, looking up from

... (read more)

Accelerating Accelerators

Economic Growth

Y Combinator has had one of the largest impacts on GDP of any institution in history. We are interested in funding efforts to replicate that success across different geographies, sectors (e.g. healthcare, financial services), or corporate form (e.g. not-for-profit vs. for-profit). 

Nathan Young
I'd like to see research alongside this to try to ascertain how GDP affects existential risk.
Greg_Colbourn
See this (by one of the Future Fund team!)

Salary Negotiation Service:

Effective Altruism

This service could negotiate salaries on behalf of EAs or others, who would then commit a proportion of the extra earnings to charity. This would increase the amount of money going to EA causes, promote Effective Altruism, and draw people deeper into the community. Given the number of EAs working at high-paying tech companies, this would likely be profitable.

(I remembered hearing this idea from someone else a few years back, but I can't remember who it was, unfortunately, so I can't give them credit unless they name themselves)

Risks: It might be expensive to find someone with the skills to do this, and that cost might outweigh the money raised.

Jan-Willem
Hi Chris! We run this on a recurring basis with Training For Good! We've already had a few dozen people on the program and we are currently measuring the impact. See https://www.trainingforgood.com/salary-negotiation
Chris Leong
I was suggesting an actual service and not just training.

Ambitious Altruistic Software Engineering Efforts

Values and Reflective Processes, Effective Altruism

There is a long list of altruistic software projects waiting to be built, with various worthy goals such as improving forecasting, improving groups' ability to intelligently coordinate, or improving the quality of research and social-media conversations.

[anonymous]

Biorisk and information hazard workshops for iGEM competitors

Biorisk and Recovery from Catastrophe, Empowering Exceptional People

iGEM competitions are interdisciplinary synthetic biology competitions for students. They bring together the best and brightest university students with a considerable interest in synthetic biology. These students already have knowledge and skills in bioengineering, and many of them will likely choose it as a career path and be very good at it. Educating them on biorisks, and especially on information hazards, would therefore be a great contribution to safeguarding against catastrophic biorisk. They could also be introduced to EA ideas and rationalist approaches in general, bringing talented young people on board.

Tessa A 🔸
You might be interested to know that iGEM (disclosure: my employer) just published a blog post about infohazards. We currently offer biorisk workshops for teams; this year we plan to offer a general workshop on risk awareness, a workshop specifically on dual-use, and potentially some others. We don't have anything on general EA / rationality, though we do share biosecurity job and training opportunities with our alumni network.

Screen and record all DNA synthesis 
Biorisk and Recovery from Catastrophe

Screening all DNA synthesis orders for potentially serious hazards would reduce the risk that a dangerous biological agent is engineered and released. Robustly recording what DNA is synthesized (necessarily in an encrypted fashion) would allow labs to prove that they had not engineered an agent causing an outbreak. We are interested in funding work to solve technical, political and incentive problems related to securing DNA synthesis.

 

Meta note: there are already some cool EA-aligned projects related to this, such as SecureDNA from the MIT Media Lab and Common Mechanism to Prevent Illicit Gene Synthesis from NTI/IBBIS. Also, this one is not an original idea of mine to an even greater extent than the others I've posted.
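One hypothetical way to realize the "robustly recording in an encrypted fashion" requirement (not specified in the original idea) is a salted hash commitment: each synthesis order is logged as a commitment that reveals nothing about the sequence, and the lab can later open the commitment to prove what it did or did not order. A minimal sketch, assuming a simple commit-and-reveal design:

```python
import hashlib
import os

def commit(sequence: str) -> tuple[bytes, bytes]:
    """Commit to a DNA sequence: publish the digest, keep the salt private."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + sequence.encode()).digest()
    return digest, salt

def verify(digest: bytes, salt: bytes, sequence: str) -> bool:
    """Open the commitment: anyone can check the revealed sequence matches."""
    return hashlib.sha256(salt + sequence.encode()).digest() == digest

digest, salt = commit("ATGCGTAC")
print(verify(digest, salt, "ATGCGTAC"))  # True: the opened commitment matches
print(verify(digest, salt, "ATGCGTAA"))  # False: a different sequence fails
```

A real system would need far more (key management, regulator access, protection against labs simply not logging orders), but the commitment primitive is what lets records stay confidential until a lab chooses to open them.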

Group psychology in space

Space governance

When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they're likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and on research into social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they'll inevitably form independent group identities, but depending on their relationships with social groups back home, these identities could support ties with Earth or create antagonistic ones. Attitudes on Earth might likewise range from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their settler colonies off-world could help us develop equitable governance structures that promote peace and cooperation between groups.

4
Alex D 🔸
Would mostly apply to bunkers too!

Lobbying architects of the future

Values and Reflective Processes, Effective Altruism

Advocacy often focuses on changing politics, but the most important decisions about the future of civilization may be made in domains that receive relatively less attention. Examples include the reward functions of generally intelligent algorithms that eventually get scaled up, the design of the first space colonies, and the structure of virtual reality. We would like to see one or more organizations focused on getting the right values considered by influential decision-makers at institutions like NASA and Google. We would be excited about targeted outreach to promote consideration of aligned artificial intelligence, existential risks, the interests of future generations, and nonhuman (both animal and digital) minds. The nature of this work could take various forms, but some potential strategies are prestigious conferences in important industries, retreats including a small number of highly-influential professionals, or shareholder activism.

EA ops: "Immigration Tech" 

I have an idea for a cloud-based, AI-powered SaaS platform to help governments handle immigration. Think KYC meets immigration.

Today the immigration process is disjointed and fragmented among different countries, and in most cases it's cumbersome and overly bureaucratic. That creates difficulties for immigrants, particularly in clear human-rights cases, as well as for countries, which may be losing out on highly skilled migrants.

The idea is a platform that connects potential immigrants with potential host countries. Instead of applying individually to a number of countries, an immigrant would upload their relevant documentation to the platform, which would then share it with their countries of choice. Another model could be for interested countries to reach out directly to the potential immigrant of their own accord.

Part of the work of the platform would be to perform the relevant KYC checks to authenticate a request as legitimate, thereby saving time and resources for national immigration departments, particularly when a request is lodged with multiple countries.

Obviously the idea is still in its early stages and there are a number of detail... (read more)

1
Avi Lewis
Basically, the aim here is twofold: 1. Skilled migrants. Enable host countries to perform a reverse lookup to attract skilled migrants with a background in, say, tech, STEM, or IT. And vice versa: support skilled migrants in their search for a new home environment that can foster their growth and development. An influx of academic and entrepreneurial immigrants can be a boost to the economies of their newly adoptive countries, and can lead to increased scientific advancement. 2. Human rights cases. All too often these fall through the cracks, with long wait times, particularly in danger zones. A principal aim of this platform would be to help find a new home country for those that need it most.

Representation of future generations within major institutions

Values and Reflective Processes, Epistemic Institutions

We think at least some of the issues facing us today would be better handled if there were less political short-termism, and if major political and non-political institutions had stronger incentives to take into account the interests of future generations. One way to address this is to establish explicit representation of future generations in these institutions through strategic advocacy, which can be done in many ways and has been piloted in the past few decades.

Normalizing regular wear of PPE

Biorisk

Containing a potential outbreak is extremely high-impact. If a high proportion of people regularly wore PPE, this could make the difference in determining whether or not an outbreak is stopped before it becomes a pandemic. Regularly wearing masks is much more doable than regularly wearing hazmat suits, although the political polarization of masks in certain countries is a barrier. Even so, preventing a fraction of future pandemics (which in expectation can be achieved by regular mask-wearing in a fraction of the world's countries) is still quite high-impact. Applying theories of social norms and prestige may help normalize the regular wear of PPE. Convincing prestigious individuals to wear masks regularly, publicizing this, and associating regular mask-wearing with morality may be helpful on this front (in America, this may only work in certain types of communities).

Targeting movement-building efforts at top universities' career offices

Effective altruism

Wouldn't it be great if top universities' career offices were aligned with EA and with longtermism? They could, for instance, use material from 80,000 Hours when advising their universities' students. An ambitious endgame is that all top universities' career offices are aligned with EA/longtermism, or at least highly aware of the paradigm and of resources like 80,000 Hours, so that they can directly encourage and/or facilitate students' pursuit of high-impact career options.

2
PeterSlattery
I like this idea. However, it might be hard to change existing career advice organisations. I therefore wonder if setting up and funding competitors would be better. These could be very affordable and prestigious career advice organisations with EA-affiliated founders and members. The aim would be to help as many high-ability students seeking advice as possible, and to use the resulting engagement and influence to prompt ethical and impactful career decisions where possible/appropriate.

Pragmatic forecasting training

Epistemic institutions

There is a big jump between reading Superforecasting and actually doing forecasting, especially at work. One problem is that the book is written as a popular book, and so doesn't cover the specifics you need - e.g. what techniques should you use to combine data to get a base rate? It would be useful to have something more textbook-like which teaches specific techniques and gives lots of worked examples and exercises. Furthermore, there are many additional challenges to implementing forecasting in a policy or funder environment, such as:

  • Decisionmaking is often messy and depends on answers to vague questions. 
  • There is often a lot of time pressure that makes adding a forecasting process (or even just learning forecasting) difficult. 
  • There are stakeholders that may need to be convinced of the value of forecasts. 
  • Implementing a forecasting system across a team, such that forecasts keep being adjusted and are later checked against outcomes, is difficult.

It would be valuable to have a consultancy helping organisations such as funders and government departments implement forecasting in a real-world context. This consultancy could then over time build up a course or textbook that teaches what they have learned to a wider audience. 
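As a taste of the "specific techniques" such a textbook might teach, here is a minimal sketch (all numbers hypothetical) of one standard way to combine several probability estimates into a single base rate: averaging them in log-odds space rather than in probability space.

```python
import math

def logit(p):
    # Convert a probability to log-odds.
    return math.log(p / (1 - p))

def inv_logit(x):
    # Convert log-odds back to a probability.
    return 1 / (1 + math.exp(-x))

def pool_log_odds(probs, weights=None):
    """Combine probability estimates by (weighted) averaging in log-odds space."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    avg = sum(w * logit(p) for w, p in zip(weights, probs)) / total
    return inv_logit(avg)

# Three hypothetical reference-class estimates for the same event.
estimates = [0.10, 0.20, 0.40]
print(round(pool_log_odds(estimates), 3))  # 0.209
```

Log-odds pooling is one of several reasonable choices (simple averaging and extremized pooling are others); the point is exactly the kind of worked, mechanical technique that popular treatments skip over.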

Targeted facilitation of high-impact career pivots for ex-academics

Effective altruism

Effective altruists/longtermists have targeted their movement-building efforts to young people (undergraduate and high-school students), an effective strategy given that young people are more likely to be in the process of career exploration and investments in them will be long-lasting.

Another effective movement-building strategy may be to help Ph.D. graduates, postdocs, etc. who are pivoting out of academia. Ex-academics are likely to have difficult-to-obtain and often impactful/generalizable skills, and are likely undervalued by the hypercompetitive academic job market (due to academics' strong, social-norm-based preference for academic jobs and the consequent oversupply). Ex-academics are also likely to be in the process of career exploration. Targeted outreach, fellowships, and career coaching by student organizations and EA movement-building experts may help direct more of these ex-academics toward high-impact career pivots.

Causal microfoundations for behavioral science

Artificial Intelligence, Values and Reflective Processes

The science of human behavior is afflicted by a replication crisis. By some estimates, over half of the empirical literature does not replicate. A significant cause of this problem is undertheorization. Without a cumulative theoretical framework from which to work, researchers often lack meaningful hypotheses to test, and so instead default to their personal, often culturally biased folk intuitions. Their resulting interpretations of studies’ data thus frequently fail to replicate and generalize (See the seminal paper of Michael Muthukrishna and my advisor Joe Henrich.)

Finding the correct causal microfoundations for behavioral science can provide a deeper understanding of precisely when we can extrapolate empirical findings out-of-sample. This could be especially helpful for making externally valid predictions in historically unprecedented situations (e.g., regarding emergent technologies or anthropogenic catastrophic/existential risks), for which much of the relevant data required for empirically estimating policy counterfactuals may not yet exist.

One area where the correct causal... (read more)

Space Policy Lab

Space Governance, Epistemic Institutions

Human activity in space is intensifying, with the growing challenge of space debris, the deployment of satellite mega-constellations, and the prospects of asteroid mining and long-term colonisation raising unique challenges in a vital yet neglected domain. Current space governance - the laws, rules, norms and institutions that structure interactions in space - falls far short of meeting these challenges. A Space Policy Lab would research governance frameworks, analyse policy issues, shape expert discourse, and advocate for effective regulatory frameworks. We would like to see a Lab that brings together applied researchers, academia and societal stakeholders in a dynamic, collaborative and transdisciplinary environment, undertaking policy experiments to identify levers for improving space governance.

AI alignment prize suggestion: Demonstrate a true sandwiching project

Artificial Intelligence

Sandwiching projects are a concrete way to make progress on aligning narrowly superhuman models. They (a) "sandwich" the model between one set of humans which is less capable than it and another set which is more capable than it at the fuzzy task in question, and (b) figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set. For example, first fine-tune a coding model to write short functions solving simple puzzles using demonstrations and feedback collected from expert software engineers. Then try to match this performance using some process that can be implemented by people who don't know how to code and/or couldn't solve the puzzles themselves.

Importantly, there are many ways to attack a sandwiching project that are slightly cheating. The most challenging version of a sandwiching project would need to make sure that no information whatsoever from the more-capable set of humans is used in the training process. The Future Fund could offer prizes for demonstrations of sandwiching projects on various levels of impressiveness and generality of the employed method.
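As a toy illustration of the "help less-capable humans reproduce more-capable judgments" step (not from the original proposal; accuracies and panel size are made up), simple majority voting over many noisy non-expert labels can recover much more accurate judgments than any single non-expert:

```python
import random

random.seed(0)

def noisy_label(truth, accuracy):
    # A non-expert labels correctly with the given accuracy.
    return truth if random.random() < accuracy else 1 - truth

def majority(labels):
    # Aggregate binary labels by majority vote.
    return int(sum(labels) > len(labels) / 2)

# Each non-expert is right 65% of the time; a panel of 15 votes on each item.
truths = [random.randint(0, 1) for _ in range(2000)]
single = sum(noisy_label(t, 0.65) == t for t in truths) / len(truths)
panel = sum(majority([noisy_label(t, 0.65) for _ in range(15)]) == t
            for t in truths) / len(truths)
print(single > 0.6, panel > single)  # True True: aggregation beats individuals
```

Real sandwiching experiments are far richer than majority voting, but this captures the core hope: a process built only from less-capable judgments can approach more-capable performance.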

Refinement of project idea #22, Prediction Markets

 

Add: "In particular, we'd like to see prediction platforms that do all three of the following: use real money, are very easy to use, and allow very easy creation of markets."
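One mechanism that makes "very easy creation of markets" feasible is an automated market maker such as Hanson's logarithmic market scoring rule (LMSR), which several prediction platforms use: a market can open with no counterparty, since the market maker always quotes a price. A minimal sketch (the liquidity parameter b is a hypothetical choice):

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    # LMSR cost function: total money paid in for the outstanding shares.
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    # Instantaneous price of a YES share (also the implied probability).
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_cost(q_yes, q_no, delta_yes, b=100.0):
    # Cost to buy delta_yes additional YES shares at the current state.
    return lmsr_cost(q_yes + delta_yes, q_no, b) - lmsr_cost(q_yes, q_no, b)

# A fresh market starts at 50%; buying YES shares moves the price up.
print(round(price_yes(0, 0), 2))    # 0.5
cost = buy_cost(0, 0, 50)           # trader pays this to buy 50 YES shares
print(round(price_yes(50, 0), 2))   # 0.62
```

The larger b is, the more money it takes to move the price; the platform subsidizes each market up to a bounded worst-case loss, which is exactly what lets anyone spin up a new market instantly.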

Masters Degrees for Movement Building:

AI Safety

Many people who want to contribute to AI safety may have strong technical abilities but not yet be in a position to contribute to research. Some of these people might also have experience in movement building. It might be worthwhile to pick highly ranked Masters of AI programs and pay for a pair of AI safety movement builders to study there, so that they can promote the idea within the school whilst upskilling at the same time. (This could work for other cause areas, like biosecurity.)

Risks: Masters degrees are very expensive.

5
Peter S. Park
Maybe only tangentially related, but a master's in passing (quitting mid-Ph.D.) is free. In fact, one receives a Ph.D. stipend while completing the degree. Addendum: This option can in theory be utilized by (1) helping EAs/longtermists apply to Ph.D. programs (perhaps in non-technical fields rather than technical fields) and (2) convincing and facilitating mid-Ph.D. students looking to make career pivots from research to movement building.

EA Hotel / CEEALAR except at EA Hubs

Effective Altruism

CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of whom there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel there would seem justified on the same grounds. (E.g. intercontinental flights can sometimes be more expensive than one month's rent in those cities.)

Research into the dual-use risks of asteroid safety

Space Governance

There is a small base rate of asteroids/comets hitting the Earth naturally, and there are efforts out there to deflect or destroy an asteroid if it were about to hit Earth. However, given the relative magnitudes of anthropogenic and natural risk, we think that getting better at manipulating space objects is dual-use: it would allow malevolent actors to weaponize asteroids, and this risk could be orders of magnitude larger than the natural one. We want to see research on what kinds of asteroid defense techniques ar... (read more)

Creating materials for alignment onboarding

Artificial Intelligence

At present, the pipeline from AI capabilities researcher to AI alignment researcher is not very user-friendly. While there are a few people like Rob Miles and Richard Ngo who have produced excellent onboarding materials, this niche is still fairly underserved compared to onboarding in many other fields. Creating more materials for a field has the advantage that, because different people find different formats helpful, having more increases the likelihood that something wor... (read more)

Machine olfaction for disease detection

Biorisk and Recovery from Catastrophe

Dogs can be trained to recognize the smell of Covid-19 and many other diseases. However, this takes a lot of time. It might be possible in the very near future to build robotic noses (machine olfaction), that work as well as a dog's. This would mean that once one neural net has been trained to recognize a new pathogen, the software could easily be distributed around the globe. Sensors in public places could then pick up in real time whether someone infectious was close by. This wou... (read more)

Cheap, lifesaving treatments

Epistemic institutions; Artificial Intelligence; Economic Growth; Effective Altruism; Research That Will Help Us Improve

Hundreds of existing, low-cost, and widely available generic drugs could be repurposed as effective treatments for additional indications. Yet this major opportunity to improve outcomes for patients suffering from cancer and other diseases while lowering healthcare costs is being ignored due to a market failure. We are interested in funding innovative solutions for bringing repurposed generic drugs to widesprea... (read more)

High Quality Outward-Facing Communications Organization

This project would create a new communications organization that deeply understands outside media and attitudes, and reports events to the community. The organization would expertly provide content and services tailored to EAs and their projects on demand. This organization is a servant, an expression of the community, and respects Truth. Carefully created, this organization should be invaluable as EA grows many times over and into new domains and competencies.

Imagine a new megaproject. How do we talk about a giant n... (read more)

2
Charles He
Background/context: Early in EA, some events created lasting adverse narratives, such as those around earning to give. It seems like these narratives were started by small initial events and could have been avoided. More recently, there have been several articles and minor media events that have presented narratives against EA or certain values in EA. Some people have expressed that their work has been made more difficult by them. If you believe these narratives represent real issues, they should be investigated and acted on. If you don't, a high-quality communication strategy should be implemented (including inaction). Insufficient competence in communications harms projects and people. Media events tend to be lumpy and unpredictable. It's unclear what will happen as many more projects and efforts are made by EAs. It seems like several senior EAs serve as ad-hoc comms or PR leaders. In some sense, this is great and the ideal. But reliance on a few people is bad. These leaders have other skills, and it's unlikely that outside communications is their comparative advantage. Future events could create much more pressure and complexity, and burnout seems possible. Expert services, subordinate to these efforts, would be good (the same EA you like now would "front-woman", but be supported by a team).

Converting key EA research outputs into academic publications

Conceptual dissemination


Academic publications are considered significantly more credible than other types of publications. However, many EA-aligned organisations, such as Rethink Priorities, produce valuable research that is never published academically. To help address this, we would like to fund academic publication support organisations to help organisations which are unaffiliated with universities get ethics approval, write grants, produce academic research outputs, etc.

Developing GCR scenario response teams and plans

Global Catastrophic Risks

As Covid-19 demonstrated, groups are unable to efficiently mobilise and coordinate to deal with potential Global Catastrophic Risks (GCRs) or large-scale events without prior preparation. This leads to extensive inefficiencies, risks and social costs. Organisations address such unpreparedness by simulating key risks and training to handle them. We would similarly like to fund teams at relevant institutions and organisations to simulate GCR-related scenarios (e.g., nuclear attacks, wars or pandemic outbreaks) in order to develop and practice responses and disseminate best practice.

Funding an AI alignment institute: a Manhattan Project-scale effort for AI alignment

Artificial intelligence

Aligning AI with human interests could be very hard. The current growth of AI alignment research might be insufficient to align AI in time. To speed up alignment research, we want to fund an ambitious institute attracting hundreds to thousands of researchers and engineers to work full-time on aligning AI. The institute would give these researchers computing resources competitive with top AI labs. We could also slow down risky AI capability res... (read more)

2
MaxRa
I think this is a pretty interesting idea, though one would need to think much more about it. One feedback I found useful when I pitched a very related idea was that the Manhattan Project might not be the ideal framing as it's so intertwined with offensive military applications of technology.

A think tank to investigate the game theory of ethics

Values and Reflective Processes, Effective Altruism, Research That Can Help Us Improve, Space Governance, Artificial Intelligence

Caspar Oesterheld’s work on Evidential Cooperation in Large Worlds (ECL) shows that some fairly weak assumptions about the shape of the universe are enough to arrive at the conclusion that there is one optimal system of ethics: the compromise between all the preferences of all agents who cooperate with each other acausally. That would solve ethics for all practical purposes. It... (read more)

1
Jim Buhler
ECL recommends that agents maximize a compromise utility function averaging their own and those of the agents that action-correlate with them (their "copies"). The compromise between me and my copies would look different from the compromise between you and your copies, right? So I could "solve ethics" for myself, but not for you, and vice versa. Ethics could be "solved" for everyone if all agents in the multiverse were action-correlated with each other to the exact same degree, which appears exceedingly unlikely. Am I missing something? (Not a criticism of your proposal, I'm just trying to refine my understanding of ECL.) :)
4
Dawn Drescher
Thanks for the comment! I think that's a misunderstanding, because trading with copies of oneself wouldn't do anything since you already want the same thing; the compromise between you would be the same as what you want individually. But with ECL you instead employ the concept of "superrationality," which Douglas Hofstadter, Gary Drescher, and others have already looked into in isolation. You have now learned of superrationality, and others out there have perhaps also figured it out (or will in the future). Superrationality is now the thing that you have in common and that allows you to coordinate your decisions without communicating. That coordination relies a lot on Schelling points, on extrapolation from the things that we see around us, on general considerations about what sorts of agents will consider superrationality to be worth their while (some brands of consequentialists, surely), etc. I've mentioned some real-world examples of ECL for coordinating within and between communities like EA in this article.
1
Jim Buhler
Thanks for the reply! :) By "copies", I meant "agents which action-correlate with you" (i.e., those which will cooperate if you cooperate), not "agents sharing your values". Sorry for the confusion. Do you think all agents thinking superrationally action-correlate? This seems like a very strong claim to me. My impression is that the set of agents with a decision-algorithm similar enough to mine to (significantly) action-correlate with me is a very small subset of all superrationalists. As your post suggests, even your past self doesn't fully action-correlate with you (although you don't need "full correlation" for cooperation to be worthwhile, of course). In a one-shot prisoner's dilemma, would you cooperate with anyone who agrees that superrationality is the way to go? In his paper on ECL, Caspar Oesterheld says (section 2, p.9): "I will tend to make arguments from similarity of decision algorithms rather than from common rationality, because I hold these to be more rigorous and more applicable whenever there is not authority to tell my collaborators and me about our common rationality." However, he also often uses "the agents with a decision-algorithm similar enough to mine to (significantly) action-correlate with me" and "all superrationalists" interchangeably, which confuses me a lot.
2
Dawn Drescher
  Yes, but by implication not assumption. (Also no, not perfectly at least, because we’ll all always have some empirical uncertainty.) Superrationalists want to compromise with each other (if they have the right aggregative-consequentialist mindset), so they try to infer what everyone else wants (in some immediate, pre-superrationality sense), calculate the compromise that follows from that, determine what actions that compromise implies for the context in which they find themselves (resources and whatnot), and then act accordingly. These final acts can be very different depending on their contexts, but the compromise goals from which they follow correlate to the extent to which they were able to correctly infer what everyone wants (including bargaining solutions etc.). Yes. Hmm, it’s been a couple years since I read the paper, so not sure how that is meant… But I suppose either the decision algorithm is similar (1) because it goes through the superrationality step, or the decision algorithm has to be a bit similar (2) in order for people to consider superrationality in the first place. You need to subscribe to non-causal DTs or maybe have indexical uncertainty of some sort. It might be something that religious people and EAs come up with but that seems weird to most other people. (I think Calvinists have these EDT leanings, so maybe they’d embrace superrationality too? No idea.) I think superrationality breaks down in many earth-bound cases because too many people here would consider it weird, like the whole CDT crowd probably, unless they are aware of their indexical uncertainty, but that’s also still considered a bit weird.
1
Jim Buhler
Oh interesting! Ok so I guess there are two possibilities. 1) Either by "superrationalists" you mean something stronger than "agents taking acausal dependences into account in PD-like situations", which I thought was roughly Caspar's definition in his paper, and then I'd be even more confused. 2) Or you really think that taking acausal dependences into account is, by itself, sufficient to create a significant correlation between two decision-algorithms. In that case, how do you explain that I would defect against you and exploit you in a one-shot PD (very sorry, I just don't believe we correlate ^^), despite being completely on board with superrationality? How is that not a proof that common superrationality is insufficient? (Btw, happy to jump on a call to talk about this if you'd prefer that over writing.)
2
Dawn Drescher
I think it’s closer to 2, and the clearer term to use is probably “superrational cooperator,” but I suppose that’s probably meant by “superrationalist”? Unclear. But “superrational cooperator” is clearer about (1) knowing about superrationality and (2) wanting to reap the gains from trade from superrationality. Condition 2 can be false because people use CDT or because they have very local or easily satisfied values and don’t care about distant or additional stuff. So just as in all the thought experiments where EDT gets richer than CDT, your own behavior is the only evidence you have about what others are likely to predict about you. The multiverse part probably smooths that out a bit, so your own behavior gives you evidence of increasing or decreasing gains from trade as the fraction of agents in the multiverse that you think cooperate with you increases or decreases. I think it would be “hard” to try to occupy that Goldilocks zone where you maximize the number of agents who wrongly believe that you’ll cooperate while you’re really defecting, because you’d have to simultaneously believe that you’re the sort of agent that cooperates despite actually defecting, which should give you evidence that you’re wrong about what reference class you’re likely to be put in. There may be agents like that out there, but even if that’s the case, they won’t have control over it. The way this will probably be factored in is that superrational cooperators will expect a slightly lower cooperation incidence to agents in reference classes of agents that are empirically very likely to cooperate while not being physically forced to cooperate because being in that reference class makes defection more profitable up to the point where it actually changes the assumptions others are likely to make about the reference class that have enabled the effect in the first place. That could mean that for any given reference class of agent who are able to defect, cooperation “densities” over 99% or s
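The disagreement above turns on how strongly one agent's choice is evidence about another's. A toy evidential calculation (payoffs and correlation values are illustrative, not from the thread) shows why the degree of action-correlation matters: cooperating only beats defecting above a threshold correlation, here p > 5/7.

```python
def ev_cooperate(p_mirror, payoffs):
    # Expected payoff of cooperating, if the other party cooperates
    # with probability p_mirror given that you do.
    return p_mirror * payoffs["CC"] + (1 - p_mirror) * payoffs["CD"]

def ev_defect(p_mirror, payoffs):
    # Expected payoff of defecting, if the other party defects
    # with probability p_mirror given that you do.
    return p_mirror * payoffs["DD"] + (1 - p_mirror) * payoffs["DC"]

# Standard prisoner's-dilemma payoffs (row player's view).
payoffs = {"CC": 3, "CD": 0, "DC": 5, "DD": 1}

for p in (0.3, 0.5, 0.9):
    best = "cooperate" if ev_cooperate(p, payoffs) > ev_defect(p, payoffs) else "defect"
    print(p, best)  # 0.3 defect / 0.5 defect / 0.9 cooperate
```

On this framing, Jim's position is that p is low for most superrationalist pairs, while Dawn's is that adopting superrationality itself pushes p high; the calculation is neutral between them but makes the crux explicit.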

Stratospheric cleaning to mitigate nuclear winters

Recovery from Catastrophes

Proposals to recover from a nuclear winter have primarily focused on providing alternative means of food production until agriculture recovers. A complementary strategy would be to develop technologies to remove stratospheric soot, which could reduce the duration and severity of the nuclear winter if used soon after nuclear strikes while smoke remains concentrated above a relatively small geographic area. Stratospheric cleaning could also prove useful in the event of supervolcano e... (read more)

You may be interested in this. I considered some pretty speculative things to prevent or mollify a supervolcanic eruption, but the volume of the stratosphere is so enormous that I think cleaning it would be very challenging.

1
gavintaylor
Yeah, I haven't looked into this much, but I think the goal would be getting as much soot as possible before it spreads out across the whole stratosphere. For instance, dumping coagulant into the rising smoke plume so that it gets carried up with the smoke could be a good option if one can respond while a city fire is still burning, as the coagulant would then get mixed in with most of the soot. IIRC from Robock's paper, it also takes a while (weeks/months) for the soot to completely spread out and self-loft into the upper stratosphere, so that gives more time to respond while it's still fairly concentrated around the sources. Determining what an effective response would be at that stage is kind of the aim of the project - one suggestion would be to send up stratospheric weather balloons with high-voltage electrostatic fields (not 100% sure, but I expect soot aerosol would be charged and could be electrostatically attracted) under areas of dense soot.
4
Jakob
A potential complementary strategy to this one could be research into putting out large-scale wildfires (though I'm not sure about the feasibility of this - is anyone aware of existing research on it?)

'Bunker' survival research grants

Biorisk and Recovery from Catastrophe

Grants to investigate what skills and tools/materials would be needed in ideal emergency kits to improve chances of survival and health. For example, what should you have in your bunker? Training in basic medical skills (like wilderness first aid), knowing how to keep people mentally well under these conditions, which micronutrients should be stocked, PPE. The greater the proportion of the population that has these things on hand, the better the chances of survival.

EA-themed Superhero Graphic Novel / Shounen Anime / K Drama

Effective Altruism Meta, Community Building

I really like to think about that Superman fanfic where he tried to aim for the 'most good'. Many existing superhero stories could be rewritten so the main protagonist tries to maximize their impact. I know non-fiction movies/documentaries were mentioned, but I think the three types of media I mentioned have the potential to become really popular (they are consumed by vast numbers of teenagers and (young) adults globally). It's a risk (it could be a flop), but I think one we could take. I am pretty confident a big enough budget can 'buy quality', so it would be better than an average story.

2
Dawn Drescher
Anecdotally, a bunch of my friends are fans of or enjoy superhero fiction along the lines of Marvel, so this could aim at just the right demographic. Or it could aim at an already over-represented demographic.

Eliminate disease-bearing mosquitos (originally suggested by David Manheim)

Malaria

Act on the long-running plan to design and release mosquitos that outcompete those which spread malaria, thereby avoiding infection.

8
Alex D 🔸
Suggestion - start with a focus on eradicating Aedes mosquitoes (aegypti, albopictus, and maybe japonicus) from the Western hemisphere. These species are invasive/non-native to the Americas (so "ecological risks" arguments against are more tenuous), cause a tremendous burden of illness (Zika, Dengue, Yellow Fever, Chikungunya, ...), and have been subject to previous eradication efforts (so there's precedent). There isn't particularly a "biorisk/GCBR" angle to this problem, but such projects being executed by a team that was very biosecurity-aware seems wise since effective tools would include some theoretically dual-use biotech. Projects could include a mix of advocacy, strategic research, tool development, and execution.

Approval Voting in the UK

Politics

The Centre for Election Science has done good work pushing approval voting in the US. In the UK there aren't ballot initiatives, but both political parties could allow approval voting in their constituencies. If they did then it would be easier to push at a national level.

2
Dawn Drescher
Ranked choice voting seems to be another top contender. I think I came away liking it more back in the day but I forgot all the details.

(Per Nick's note, reposting)

 Replication funding and publication

Epistemic Institutions 

The replication crisis is a foundational problem in (social) science. We are interested in funding publications, registries, and other funds focused on ensuring that trials and experiments are replicable by other scientists.

Advocacy for [metascience, land-use reform, clean energy technologies, or other individual planks of the progress studies platform]

Economic growth, Epistemic institutions

You already list high-skill immigration advocacy, pandemic-prevention breakthroughs, and a variety of institutional-innovation topics; why not the rest of the "abundance agenda"?  (I already listed general/high-level philosophical research, but here I am suggesting specific sub-areas.)

Land use, construction costs, "yimby", etc. -- Has it gotten more difficult for civilization to build... (read more)

Legalization of MDMA & psychedelics to reduce trauma and cluster headaches

Values and Reflective Processes, Empowering Exceptional People

Millions of people have PTSD that causes massive suffering. 

MDMA and psychedelics are being legalized in the U.S., and there are both non-profit and for-profit organizations working in this space. Making sure everyone who wants it has access, via more legalization, and subsidization, would reduce the amount of trauma, which could have knock-on benefits not just for them but the people they interact with.

... (read more)

EA storytelling

Research That Can Help Us Improve, Values and Reflective Processes, Effective Altruism

The stronger the stories that EA tells, the more people will be convinced to put EA ideas into practice in their own lives. We’re interested in funding people with a proven track record in storytelling, including generating viral content, to create EA stories that could reach millions of people.

(Potentially extends existing Project Ideas ‘A fund for movies and documentaries’ and 'Critiquing our approach'.)

Project ideas from this page that are relevant to this idea:

EA-themed Superhero Graphic Novel / Shounen Anime / K Drama (jknowak)

Research into why people don't like EA

Research That Can Help Us Improve

Many people have heard of EA and weren’t convinced. We want to understand why, so that we can find approaches to convince them. If we can win more people over to EA, we can directly increase the impact that EA has in the world.

We’re excited to fund proposals to research why people do and don’t like EA, and the approaches that are most effective in winning people over to EA.

(Potentially extends existing Project Idea 'Critiquing our approach'.)

2
PeterSlattery
I have had similar ideas about this. Your idea also potentially relates to / works well with my 'EA brand assessment/public survey' idea.

Find good ways to distribute books to people with high potential

Epistemic Institutions, Effective Altruism

This project has two parts:
1) find people with high potential, especially students.
2) find a good way to distribute books on world problems to them.

Ad 1: Examples:
- students in low- and middle-income countries may have a higher demand for English books
- participants in STEM olympiads
- people with SAT scores > x
- students at selective schools

Ad 2: It is important to do it in a nice, non-preachy way.
One possible implementation is a book club that sends out a book every two months, with regular online meetups for its readers.

Create and curate educational materials on EA-related topics

 Effective Altruism

The EA Fellowship and the EA Handbook took existing resources and curated them into a good introduction to EA. Do something similar with different formats and subjects.
I.e., create:
- Fellowships
- Reading lists
- Recordings of existing courses in academia
- and so on

With the goal of:
- making it easy to take up new fields.

In fields like:
- Rationality
- Bioweapons
- Forecasting
- and so on.

EA Berkeley Hostel

Effective Altruism

Every week, EAs pass through Berkeley and someone needs to pay around $200 a night to house them or scramble to find a couch they can crash on. This becomes increasingly complicated when someone gets a trial-run offer and needs to stay a week longer than expected, or even gets a job offer and suddenly needs to rush to find housing. Currently, there exists NO hostel (or even a hotel room that costs less than a couple hundred bucks) even close to Berkeley, much less an EA hostel. A hostel in Berkeley would allow flexibility ... (read more)

EA services consultancy network or organisation (early draft)
Movement building and resolving coordination problems


Considerable need for support for small projects on tech, design etc. Many effective charities lacking key ingredients for improvements. Many good ideas never get off the ground due to lack of technical expertise. Can do surveys of movement leaders and also scale up as needed when there is more demand. Incl:
Tech support organisation
Associated media and PR services for EA organisations to publicise work via media
Content creation for SEO and medi... (read more)

4
Chris Leong
Altruistic Agency covers the tech portion of this, but providing other services could be valuable as well.
2
PeterSlattery
Yes, I agree. I think that they and similar organisations should be well funded once validated as useful. Right now, Altruistic Agency is helping READI to build a new website. I also have several other EA projects that I am going to ask for help with.

EA Micro Schools

Effective Altruism

We would be excited to fund projects that make it easier to start up an EA-aligned, accredited private school.

As EA matures, there will be more and more parents. Kids of self-identified EAs are likely to be smart and neurodivergent, and may struggle with the default schooling system. They're also likely to grow into future adult EAs. Remote work options will free up location choice, and there could be major community-building gains if parents can easily find their ideal school in an EA hub.

Variation: develop an EA stream o... (read more)

Creating more EA aligned journals or conferences
Movement building 

Academic publications are considered to be significantly more credible than other types of publications. However, the academic publication system is highly misaligned with key EA values (e.g., efficiency and intellectual novelty/impartiality). We would therefore like to encourage initiatives to start, influence or acquire influential academic journals or conferences to enable EA to have better academic impacts towards our desired outcomes.


Just FYI,  here is copy explaining a relate... (read more)

1
monadica
Hi Peter, very awesome idea! I am working on this kind of project; it would be nice to talk with you.

Better recruitment and talent scouting networks
Movement building, coordination, coincidence of wants problems

Decentralised social good communities face significant coordination problems: many talented social actors and influencers are either unaware of key knowledge or unable to find a clear fit for their skills. This is particularly true in less developed countries, where relevant networks are relatively nascent. To address this, we’d like to support work that develops the global network of recruiters and talent scouts. For instance, these organisatio... (read more)

6
WilliamKiely
Recruitment agencies for EA jobs

Empowering Exceptional People, Effective Altruism

There are hundreds of organizations in the effective altruism ecosystem and even more high-impact job openings. Additionally, there are new organizations and projects we’d like to fund that need to recruit talent in order to establish founding teams and grow. Many of these often lack adequate resources to do proper recruiting. As such, we’d be excited to fund EA-aligned recruitment agencies to help meet these hiring needs by matching talented job-seekers with high-impact roles based on their skills and personal fit.

(Also submitted via the Google Form.)

Other very similar ideas: Lauren Reid’s Headhunter Office idea and aviv’s Operations and Execution Support for Impact idea.

EA community housing network
Movement building & coordination 

Social movement building requires key members of the community to have regular rewarding interactions. To catalyse this, we would like to establish more EA organisations and institutions across the world, and more travel arrangements between them. For instance, this could be modelled on approaches such as the “International House” student accommodation, which provides cheap accommodation for students and works to instil cosmopolitan values.

...

A late update is that I ... (read more)

Intellectual coaching

Empowering exceptional people, Effective Altruism

Many people with the potential to do good research and writing work hit blockers that are a complex mix of psychological and intellectual issues: for example, uncertainty and fear around what to work on, or lack of confidence in one's abilities. It's difficult to find someone to help address this kind of problem. Therapists and mainstream coaches don't have a good understanding of research and EA work. But within EA, most of the coaching available is focussed on career choice or produc... (read more)

Bonuses/prizes/support for critically situated or talented workers

Empowering Exceptional People

Work that advances society should be rewarded and compensated at fair market value. Unfortunately, rewards are often incommensurate, delayed, or altogether unrealized. We'd be excited to see a funding process that 1) identifies work that’s underappreciated by, or insulated from, the market and 2) provides incentives for workers/teams to stay put and complete said work.

EA often focuses on building new organizations to solve problems, but talented people are al... (read more)

EA to create an incubator to fund social enterprises with a high social return on investment.

This will help improve the visibility of the EA brand. It will also help connect ideas to improve the world with capital.

Safety of comprehensive AI services

Artificial Intelligence

I imagine that comprehensive AI services (CAIS) could face similar problems to intelligence agencies. Ideally, an intelligence agency would only hire those people who are maximally trusted, but then they could hire hardly anyone. Instead they split the information that any one person can see such that (1) that person can’t do much harm with the one piece of the full picture that they have and (2) if it leaks or the person exploits their knowledge in illegitimate ways, the higher-ups can trace the le... (read more)

Solve Type 2 Diabetes

Biorisk and Recovery from Catastrophe

Type 2 Diabetes, caused by insulin resistance, is one of the top 10 causes of disability (DALYs) and is also a root cause of ischemic heart disease and stroke, which are also in the top 10. People with diabetes are immunocompromised and have worse outcomes from infection (as we saw with Covid). Several treatments to reverse diabetes are known, and there are groups like Virta Health doing good work in this space, but some treatments are prohibitively expensive (like GLP-1 agonists). Prevention and nutr... (read more)

2
Lukas_Gloor
Inspired by this proposal, researching the claim that seed oils may be responsible for many "diseases of civilization" (contamination theory of obesity). Probably(?) not true but highly important and actionable if true.
1
Mohamed Labadi
How about treating diabetes with fasting? I know many people who used fasting to cure diabetes.
1
Lauren Reid
Yes, I am very pro-fasting (not medical advice, just opinion). The Obesity Code by Jason Fung is a really good description of why this works and I have given copies to colleagues to convince them. People often need support to start fasting - I have a dream of a retreat with these supports, like bone broth.

Continuous sampling for high-risk laboratories
Biorisk and Recovery from Catastrophe

We would be excited to fund efforts to test laboratory monitoring systems that would provide data for biosafety and biosurveillance. The 1979 Sverdlovsk anthrax leak happened because a clogged air filter had been removed from the bioweapons laboratory's exhaust pipe and no one informed the night shift manager. What if, by default, ventilation ducts in high-containment laboratories were monitored to detect escaping pathogens? Establishing a practice of continuous sampling wou... (read more)

4
Alex D 🔸
Add-on: for natural epidemics, there are a number of “event-based surveillance systems” that monitor news, social media, and other sources for weak signals of potential emergencies. WHO, PAHO, and many national governments run such systems, and there are a few private ones (one of which I run). One could set up such a system focussing exclusively on the regions immediately surrounding high containment labs. There are only ~60 BSL-4 labs, so you could conceivably monitor each of these regions quite closely without an impossibly large team. Direct monitoring would be much better, but this might be a useful adjunct.

Creative Arms Control

Biorisk and Recovery from Catastrophe

This is a proposal to fund research efforts on "creative arms control," or non-treaty-based international governance mechanisms. Traditional arms control -- formal treaty-based international agreements -- has fallen out of favor among some states, to the extent that some prominent policymakers have asked whether we've reached "The End of Arms Control."[1] Treaties are difficult to negotiate and may be poorly suited to some fast-moving issues like autonomous weapons, synthetic biology, and cyber... (read more)

Look for UFOs

Space Governance

In recent years, there has been an upsurge in reports by the military of sightings of UFOs, including detecting the same object with multiple modalities at once (examples: 1, 2).

Avi Loeb proposes to create a network of high-resolution sensors (just as the military have). But compared to the military, their results will not be classified and can be openly analyzed by scientists. The cost of doing this is on the order of millions of dollars.

Knowing if there are aliens has many consequences, including for the ... (read more)

Write encyclopedias (esp. Wikipedia), then translate them (esp. to Russian and Chinese)

Epistemic Institutions

Create a team of people who will write articles on Wikipedia on subjects related to EA. Why this is important is described here.

Besides writing articles on English Wikipedia, they can also:

  1. Create good illustrations (somehow, medical articles tend to have much better pictures than other areas)
  2. Translate these articles to other languages (especially Russian and Chinese)
  3. Topics that are not notable enough for Wikipedia can be described in a sep
... (read more)
3
MaxRa
I really like the idea, especially regarding the idea of improving understanding between the West and China. Unfortunately, I think Wikipedia won't work because Wikipedia has very strict norms against non-volunteer contributions. There are Chinese alternatives, but IIRC they are under relatively tight ideological control. 

Researching valence for AI alignment

Artificial Intelligence, Values and Reflective Processes

In psychology, valence refers to the attractiveness, neutrality, or aversiveness of subjective experience. Improving our understanding of valence and its principal components could have large implications for how we approach AI alignment. For example, determining the extent to which valence is an intrinsic property of reality could provide computer-legible targets to align AI towards. This could be investigated experimentally: the relationship between experiences and their neural correlates & subjective reports could be mapped out across a large sample of subjects and cultural contexts.

6
Greg_Colbourn
I've been wondering whether AGI independently discovering valence realism could be a "get out clause" for alignment. Maybe this could even happen in a convergent manner with natural abstraction?

Researching the relationship between subjective well-being and political stability

Great Power Relations, Values and Reflective Processes

Early research has found a strong association between a society's political stability and the reported subjective well-being of its population. Political stability appears to be a major existential risk factor. Better understanding this relationship, perhaps by investigating natural experiments and running controlled experiments, could inform our views of appropriate policy-making and intervention points.

Prevent community drainage due to value drift

Effective Altruism, Movement building

Most Effective Altruists are still young and will have the greatest impact with their careers (and spend the greatest amounts of money) over the coming decades. However, people also change a lot, and for some this leads to a decrease in engagement or even full drop-out. Since there is evidence that drop-out rates might be up to 30% throughout the careers of highly engaged EAs, this is a serious loss of high-impact work and well-directed money.

Ways of tackling this prob... (read more)

6
Kirsten
I find most discussion about discouraging value drift pretty distasteful. I don't have any reason to believe my future self's values will be worse than my current self's, so I don't want to be uncooperative with her or significantly constrain her options. I'm especially uncomfortable with the implication that becoming less involved with EA means someone's values have gotten worse.
2
Gavin
What about the predictable effect of becoming less open-minded and tolerant as we age? Sure, there's a sense in which I don't know that that state is worse than my current one. But it seems worse, and that seems enough to worry about it.
4
Kirsten
Becoming less open-minded seems like a classic case of a healthy explore/exploit trade-off over a lifetime. I'm less open-minded about a lot of things than I was a decade ago, and I don't think that's a bad thing. I wouldn't worry about a change of the same magnitude again over the next decade. Edit: For me, open-mindedness isn't a moral value, it's just a means to an end. People who intrinsically value open-mindedness might be much more nervous about becoming less open-minded! That would be totally reasonable. Edit 2: There's something very ironic about using "older people become less open-minded" as a rationale for "I should commit myself to one social movement for the rest of my life".
2
Gavin
We may be talking past each other, because what I mean by open-mindedness seems extremely instrumentally valuable on all kinds of views: if I become less impartial, less open to evidence, and less willing to adapt to good changes in the world, this ought to concern me. (I actually don't know how reliable the above results about old-age conservatism are, so discount the above to the extent you don't trust those studies.) @ Edit 2: I'm not OP and don't intend this as an argument for committing to EA 4eva. Instead it's an example of value drift which concerns me, independent of where the social movement lands.

Operations and Execution Support for Impact

Empowering Exceptional People, Effective Altruism

The skill of running operations for building and growing a non-profit organization is often very different from doing the "core work" of that org. Figuring out operational details can suck energy away from the core work, leaving many promising people deciding not to start new orgs even when it is appropriate and necessary for scaling  impact. We'd like to see an organization that could provide a sort of recruiting and matchmaking service which identifies promis... (read more)

Optimal strategies for existential security

Research That Can Help Us Improve

If we don't achieve existential security (a persistent state of negligible x-risk), an existential catastrophe is destined to happen at some point, wiping out humanity's longterm potential. Despite the incredible importance of achieving existential security, there is a lack of a consensus within the EA community on how best to do so, which is partly down to a lack of high-quality, in-depth research on this question. Instead, most research has focused on reducing specific existentia... (read more)

Credible expert Q&A forums

Epistemic institutions

Decisionmakers (e.g. funders and policymakers) tend to use a mixture of desk research, interviews with experts, and workshops with experts to inform their decisions. Online forums where questions can be asked of experts could be a useful part of this process. Forums are useful compared with desk research as information can be sought that may not be covered in existing sources. They are useful compared with interviews and workshops as they require less organisational overhead to get expert input and what i... (read more)

2
IanDavidMoss
This sounds a bit like the EA Librarian?
2
Nathan Young
I'd like that on this forum tbh

(Per Nick's note, reposting)

Longitudinal studies

Epistemic Institutions; Economic Growth

We are interested in funding long-term, large-scale data collection efforts. One of the most valuable research tools in social science is the collection of cross-sectional data over time, whether on educational outcomes, political attitudes and affiliations, or health access and outcomes. We are interested in funding research projects that intend to collect data over twenty years. Such projects require significant funding to ensure follow-up data collection.

Airdrop for EA Forum karma holders

Empowering Exceptional People, Effective Altruism

Take a snapshot from some time in the past (e.g. the date of the OP), and award $100 for each karma point to all EA Forum karma holders. This could be extended and scaled as appropriate to the AI Alignment Forum and perhaps r/EffectiveAltruism and other places as seen fit. As a one-off, this can't be gamed. It might encourage more participation going forward, but it should be made clear that there should be no expectation of a repeat. Ideally, the money would be no strings attached. It wo... (read more)
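The proposed payout rule is just a linear map from snapshot karma to dollars. A minimal sketch, assuming a hypothetical snapshot (the usernames and karma figures are purely illustrative; the $100/point rate is the one proposed above):

```python
# One-off airdrop: $100 per karma point, based on a fixed past snapshot date.
# Usernames and karma values are illustrative, not real forum data.
SNAPSHOT = {"alice": 6611, "bob": 1300, "carol": 42}
RATE_USD_PER_KARMA = 100

def airdrop_payouts(snapshot, rate=RATE_USD_PER_KARMA):
    """Map each karma holder to a no-strings-attached payout in USD."""
    return {user: karma * rate for user, karma in snapshot.items()}

payouts = airdrop_payouts(SNAPSHOT)
print(payouts)                # per-user payouts
print(sum(payouts.values()))  # total cost of the airdrop to the fund
```

Because the snapshot predates the announcement, no amount of posting after the fact changes anyone's payout, which is what makes the one-off version ungameable.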

I have 6611 karma and if y'all gave me $600k no strings attached, I'm not gonna lie I would buy a really nice house.

And now an extra $1.5k worth of house on top of that!

3
Greg_Colbourn
I appreciate the honesty. [Note the rest of this is not directed at Khorton; more to the people upvoting her comment].

But I'm disheartened by the fact that this comment has got high karma. It looks pretty bad from an outside perspective that such selfish use of a windfall is celebrated by effective altruists. And also from an inside perspective - it makes me wonder how altruistic most EAs actually are. I mean, I hope most of us would at least give the standard GWWC 10% away (and maybe that is implicit, but it isn't to an outsider reading this -- and a lot of outsiders probably are reading this given the attention that the FTX Future Fund is getting). Where are the comments saying "I'd fund X", "..start Y", "..do independent research on Z"!?

Maybe it's just that no one is taking this seriously -- and I get it, it was meant partly as an amusing play on the crypto airdrop phenomenon -- but it's still a bit sad to see such cynicism around altruism being promoted on the EA Forum. If EAs can't be expected to do EA things with large unexpected windfalls without there being strings attached, then I question the integrity of the movement. You might argue that EA is no longer funding constrained (so therefore it's fine to be selfish), but funding saturation is not evenly distributed.
9
alex lawsen
Khorton buying a nice house and meeting her GWWC pledge seem perfectly compatible, and suggesting that her planning to do this casts significant doubt on the integrity of the movement seems both over the top and unkind, and I don't think the 'I'm directing my complaining at upvoters not khorton' does much to mitigate that.
2
Greg_Colbourn
For the record, I'm not saying that "house + GWWC pledge"  is lacking in integrity, I'm saying that "house" alone is (for an EA) (and that's what it looks like to an outsider who won't know about Khorton taking the GWWC pledge).
4
Kirsten
I doubt the people who upvoted this comment are encouraging me (although maybe they are!). I think it's more likely that they think it was a valuable piece of information.
4
Greg_Colbourn
I guess I'm reading more into it. To me it looks something like: "Haha, Greg is so naive to think that rank and file EAs can be trusted to do good things if we give them free money, no strings attached. See, this is the kind of thing we should expect." Possibly with the additional: "And why not? EA is no longer funding constrained, and there isn't much that non-expert, small-to-medium donors can do with money now" [both quotes would come with more courteous, careful phrasing, and caveats, in real life of course. I've written it how I have because I'm somewhat emotionally invested; my apologies.].  And outsiders looking on might be thinking "See, these so-called 'effective altruists' are no different than the rest of us when it really comes down to it. The most upvoted comment on a thread about an airdrop is one about spending the cash on a house!"

There are some serious incentives issues here where the EAF users with the most karma (and thus most incentive to gain from this proposal) are also the ones with most strong upvote power. :O

3
Yonatan Cale
Inspired by this, I am reading all the suggestions from the bottom (from least-karma)
2
Greg_Colbourn
Yes. FTX: please  try to ignore the karma on the proposal comment when considering it!

Slightly disappointed that this has ended up on negative karma. I think it's at least triggered some somewhat fruitful discussion. I do think a broad-based retroactive funding of public goods in the EA community would be good, especially in terms of its knock-on effects for the next generation of projects. Mediation of this via crypto and impact certificates seems promising, even if a direct airdrop based on an imprecise metric such as EA Forum karma isn't the way to go.

4
Jackson Wagner
Even putting aside Nuno's list of pretty serious issues preventing karma from correlating well with impact, I think $100 is way too high a value for current-day EA karma points (maybe it could be appropriate for karma points earned years ago in the Forum's infancy). If one point of karma was worth on average more than $100 donated to EA charities, then posting on the EA forum would be so preposterously effective that my 1300 karma points accrued this year would be worth ~$130,000 to the movement, massively outweighing any donations I could hope to make to EA charities, also seemingly outweighing the impact of many other forms of direct work (since most EA salaries are lower than $130K/year) and equivalent to saving more than 25 lives just by commenting. It would also imply that CEA is massively underinvesting in support for the Forum.

On the other hand, if karma points were worth only $1 of donations to EA charities, then everyone would be completely wasting their time here (depending on how long it takes you to write comments, conceivably doing less good than you could do by donating 10% of your income after working an extra hour at minimum wage, etc), and CEA would be massively overinvesting by spending more money on supporting the Forum than the value it actually produces.

Realistically I think karma points are probably worth $20-$30 "on average". But the average is dragged upwards by a small number of extremely valuable posts. From an inside-view perspective, I think my participation on the Forum has been decently helpful to folks, but I probably haven't discovered any totally revolutionary insights that will become foundational for EA causes going forward. So I figure if folks like me want to try to quantify their Forum contributions despite all those valid objections I linked, they should figure each karma point to be worth ~$10.
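The break-even reasoning here is simple multiplication. A quick sketch of the scenarios being compared (the 1300-karma figure is the one quoted in the comment; the per-point rates are the hypotheticals under discussion):

```python
# Implied dollar value of a year of forum participation under different
# assumed values per karma point (figures taken from the comment above).
KARMA_THIS_YEAR = 1300

implied_value = {usd_per_point: KARMA_THIS_YEAR * usd_per_point
                 for usd_per_point in (1, 10, 25, 100)}

for rate, value in implied_value.items():
    print(f"${rate}/point -> ~${value:,} of implied impact")
```

At $100/point a year of commenting implies ~$130,000 of impact, the implausibility the comment points at; at $10/point it is a more modest ~$13,000.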
4
Greg_Colbourn
Interesting analysis. The airdrop wouldn't need to be based on the estimated value of karma points though. I was thinking of it more in terms of a mechanism for decentralising (grant making) power in the EA movement. $100 was chosen to make the sums allocated to people significant in a way that $10 probably wouldn't be (e.g. if it was $10, most people wouldn't really get enough to fund or start new projects, quit their job and do independent research, etc). Nuño's list probably means that there should be some attempt to apply adjustments to scores. But this does open a can of worms. Are there any other promising proxies for EA impact that could be used for an airdrop?
4
Jackson Wagner
Maybe instead of airdropping something that can be directly exchanged for cash (in which case many people would just buy a house with their $600K), we airdrop a resource that is somehow restricted such that it has to be a donation?  A Forum-Karma-based airdrop seems like it would be an awesome way to kick off an impact certificates program -- people could use their KarmaCoin to invest in impact certificates, with the promise that if you invest wisely, down the road the certificates for the most impactful projects might get bought by a mega-donor like OpenPhil, and that's how you'd ultimately get a cash payout.
2
Greg_Colbourn
Sounds good! I wonder what loopholes could emerge though? Most cryptos end up with a market value even if they don't intend to have one. I suppose KarmaCoin could be timelocked somehow. It makes it more difficult to trade, but people can still make IOU contracts.
3
Yonatan Cale
I'd be afraid of playing around with the karma system. I think the EA Forum / LessWrong might become the high-quality-discussion social media of the future, and I wouldn't make changes to the karma system without at least considering how the change impacts that vision.

It wouldn't be a change; it's a one-off reward for past activity (retroactive funding of a public good, as it were :))

2
Greg_Colbourn
Ok, so LessWrong are actually doing this(!) - but for a week going forward from April Fool's Day - rather than retroactively, and for $1/karma point (rather than $100).
2
Dawn Drescher
Note that there are some forum users who have posted highly upvoted posts and comments under different pseudonymous accounts. :-)
4
Greg_Colbourn
Yes. Perhaps we need to add Metamask support to the Forum :)

Sponsoring Debates on Future Fund Issues

Effective Altruism

The Future Fund could run debates on these issues with high-level debaters (i.e. World Champions or finalists) receiving significant compensation to take part. One format that would be particularly exciting would involve prominent academics giving the opening speeches for both sides, with debaters taking the debate from there (for example, imagine Bostrom and Peter Singer debating how much we should focus on x-risks from AI vs. the present day). The debates would be recorded and prominently advertised... (read more)

2
Greg_Colbourn
See also: introduce important people to the most important ideas by way of having seminars they are paid a "speaker's fee" to attend (more).
4
Kirsten
I am inherently suspicious of paid seminars and would personally downgrade the credibility of any ideas I heard in a paid seminar (even if I went to get the money!)
2
Greg_Colbourn
Would you feel the same way about a conference you were paid a fee for speaking at? One way of averting this could be to give participants an amount of money to allocate to a charity of their choice, instead of paying them (like on celebrity game shows).
2
Kirsten
If I'm paid to speak, that's not suspicious; if I'm paid to listen (in any way), that's suspicious. Edit: Actually now that I work for government, being paid to speak is a little suspicious, and I am required to decline and report paid speaking invitations! Because it's an easy cover for bribery. But in general I don't think it's suspicious.
2
Greg_Colbourn
Ok, yes, in my proposal I say "it should be made clear that the fee is equivalent to a 'speaker's fee' and people shouldn't feel obliged to 'toe the party line', but rather speak their opinions freely." There would be some listening involved too, though. I also say "In addition to (or in place of) the fee, there could be prestige incentives like having a celebrity (or someone highly respected/venerated by the particular group) on the panel or moderating, or hosting it at a famous/prestigious venue". But maybe this would also arouse suspicion.

Happy Altruist Hotel

I have submitted this idea: creating a Happy Altruist Hotel.
My project idea focuses on improving the wellbeing of effective altruists by creating a center dedicated to that purpose. I am thinking of a physical location, preferably in a natural environment. I will call it (for now) the "Happy Altruist Hotel". The way I see it, the Happy Altruist Hotel is a place where all kinds of programs, workshops, retreats and trainings will be organized for (aspiring) effective altruists.

The Happy Altruist Hotel will be a place where EAs come together for inspiration,... (read more)

Research on how to minimize the risk of false alarm nuclear launches

Effective Altruism

Preventing false alarm nuclear launches (as Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and disseminating and implementing this research, could potentially be very impactful.

Organising/sponsoring Hackathons

Epistemic institutions, empowering exceptional people

Many highly skilled programmers are lured into the private sector, either to work for prestigious companies or to found startups, often with little positive impact. We'd like to see these people instead working for or starting their own EA-aligned organisations.

To encourage this, we'd be excited to fund an organisation that involves itself with programming hackathons, scouting for highly creative and skilled individuals and groups. This could mean sponsoring existing hackathons or running its own.

Prestigious forecasting tournaments for students

Epistemic institutions, empowering exceptional people

To scale up forecasting efforts, we will need a large body of excellent forecasters to recruit from. Forecasting is a skill that improves over time, and it takes time to build a track record to distinguish excellent forecasters from the rest - particularly on long-term questions. Additionally, forecasting builds generally useful research and rationality skills, and supports model-building and detailed understanding of question topics. Therefore, getting stu... (read more)
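As an illustration of how a track record might be scored in such a tournament, here's a minimal sketch of the standard Brier score (the example forecasts are made up):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who was confident and right, then mildly wrong:
score = brier_score([0.9, 0.2], [1, 0])  # ((0.1)^2 + (0.2)^2) / 2 = 0.025
```

Averaged over many resolved questions, scores like this are what let a tournament distinguish excellent forecasters from lucky ones.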

Solving institutional dysfunction

Values and Reflective Processes

Thousands of institutions have the potential to do more good but are hampered by dysfunctions such as excess bureaucracy, internal politics, and misalignment between their actions and the values they and their employees hold. Often these dysfunctions are well known to their employees, yet they persist.

We're excited to fund proposals to study institutional dysfunction and investigate solutions, as well as tools to monitor dysfunctions that lead to poor EA outcomes, and to empower employees to solve t... (read more)

Fund publicization of scientific datasets

Epistemic institutions
 

Scientific research has made huge strides in the last 10 years towards more openness and data sharing. But it is still common for scientists to keep some data proprietary for some length of time, particularly large datasets that cost millions of dollars to collect, such as fMRI datasets in neuroscience. More funding for open science could pay scientists when their data is actually used by third parties, further incentivizing them to make data not only accessible but usable. Op... (read more)

Buying and building products and services that influence culture

Movement building

Mass media producers, such as news services, computer game studios, and book and movie studios, heavily influence culture. Culture in turn creates and shapes norms for collective values (e.g., trust in various groups and institutions) and behaviours (e.g., prosocial or antisocial behaviour). Collective values and behaviours then influence social outcomes. We'd therefore welcome work to build or acquire mass media producers and use these to promote relevant values and behaviou... (read more)

EA movement building evaluation support
Movement building

Effective social movement building requires us to understand what is working well and why. However, there is very limited information on how to track EA groups' performance, and on how different approaches perform in achieving key outcomes. We would like to support work to address this, for instance by helping standardise EA group metrics and creating simple tracking systems (e.g., distribution of a single sheet and related data visualisation program for tracking attendees across all groups).
 

Understanding public awareness and opinion of key EA values, the EA movement, and/or key organisations
Movement building & conceptual dissemination 

What the public thinks of EA is quite relevant to many key outcomes, including movement building. We would therefore like to fund work to understand public trends on topics such as key values (e.g., longtermism, cosmopolitanism or resource maximisation), attitudes towards activist movement 'brands' (e.g., EA, vegan activism, Extinction Rebellion), and awareness of and attitudes towards key EA organisations ... (read more)

On-demand Software Engineering Support for Academic AI Safety Labs

AI safety work, e.g. in RL and NLP, involves both theoretical and engineering work, but academic training and infrastructure do not optimize for engineering. An independent non-profit could cover this shortcoming by providing software engineers (SWEs) as contractors, code reviewers, and mentors to academics working on AI safety. AI safety research is often well funded, but even grant-rich professors are bottlenecked by university salary rules and professor hours, which makes hiring competent... (read more)

3
aogara
Really like the idea. Would be very interested in working on projects like this if anyone’s looking for collaborators.
[anonymous]7
0
0

Leadership and management auditing
Effective Altruism


It is uncertain at what cost to employees' well-being EA organisations achieve their impact. A sustainable ecosystem of EA organisations with long-term impact should have a foundation of evidence-based leadership and management that doesn't harm employees or volunteers (or at least tries to avoid this).
We'd love to see an organisation that evaluates the leadership and management practices of EA organisations and their effect on the well-being of employees at all levels, as well as making recommendations for improvement.

2
MaxRa
I really like this idea. My tentative impression is that:

  • management quality has low-hanging room for improvement at more than half of EA orgs
  • management quality is very important
  • you can probably find a non-EA consultancy that could understand EA culture, collect best practices and support picking the most low-hanging fruits

Establish a virtual EA co-working space in the metaverse or on another platform to allow EAs from every country to meet and create new ideas together.

Making AI alignment research among the most lucrative career paths in the world

AI alignment

Having the most productive researchers in AI alignment would increase our chances of developing competitive aligned models and agents. As of now, the most lucrative careers tend to be at top AI companies, which attract many bright graduate students and researchers. We want this to change, and for AI alignment research to become the most attractive career choice for excellent junior and senior engineers and researchers. We are willing to fund AI alignment workers at wages higher than top AI companies' standards. For example, wages could start around $250k/year and grow with productivity and experience.

A few people have mentioned retroactive public goods funding. I'd suggest broadening the scope a bit:

Better funding models for altruistic projects

Effective Altruism, Research That Will Help Us Improve

Market-oriented funding models are often a poor fit for altruistic projects due to the free-rider problem. On the other hand, traditional philanthropy is limited by available funding and uncertainty about how best to allocate it. Various mechanisms have been proposed to address these problems, including certificates of impact, mutual matching, quadratic funding

... (read more)
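For concreteness, here's a minimal sketch of the quadratic funding rule mentioned above: a project's total funding is the square of the sum of the square roots of individual contributions, with a matching pool covering the difference. The example amounts are made up.

```python
from math import sqrt

def quadratic_match(contributions):
    """Subsidy a matching pool adds on top of direct contributions.

    Total funding = (sum of sqrt(c_i))^2, so many small donors attract
    far more matching than one large donor giving the same total.
    """
    total = sum(sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# Four donors giving $1 each vs. one donor giving $4:
many = quadratic_match([1, 1, 1, 1])  # (4*1)^2 - 4 = 12
one = quadratic_match([4])            # (2)^2 - 4 = 0
```

The mechanism deliberately rewards breadth of support, which is one way of approximating the public-goods value that ordinary markets miss.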

Givewell for AI alignment  

Artificial intelligence

When choosing where to donate to have the largest positive impact on AI alignment, the current best resource appears to be Larks' annual literature review and charity comparison on the EA/LW forums. Those posts are very high quality, but they're only published once a year and are ultimately the views of one person. A frequently updated donation recommendation resource, contributed to by various experts, would improve the volume and coordination of donations to AI alignment organisations and projects.

T... (read more)

Research scholarships / funding for self-study 

Empowering exceptional people

The value of a full-time researcher in some of the most impactful cause areas has been estimated as being between several hundred thousand to several million dollars per year, and research progress is now seen by most as the largest bottleneck to improving the odds of good outcomes in these areas. Widespread provision of scholarships / funding for self-study could enable far more potential researchers to gain the necessary experience, knowledge, skills and qualifications to ma... (read more)

Quantify the overall suffering from different conditions, and determine whether there's misallocation of resources in biomedical research.

I suspect there's a big gap between the distribution of resources allocated to the study of different diseases and what people actually suffer from the most. Among other factors that lead to non-optimal allocation, I'd guess that life-threatening diseases are overstudied whereas conditions that may really harm people's well-being, but are not deadly, are understudied. For example, I'd guess that chronic pain is understud... (read more)

2
EdoArad
Related: Cochrane's series of papers on waste in science and Global Priorities Project's investigation into the cost-effectiveness of medical research

A think tank to develop proof of stake for international conflicts

Artificial Intelligence, Great Power Relations, Space Governance

International conflicts already pose a risk, and that will only get worse when AI arms races start among countries. Yet establishing a central world government is hard, and bears the risk that it may be taken over by a dictator.

Currently we're implementing an algorithm that puts at stake the lives of millions of citizens, and where almost anyone can slash the stake of almost anyone else. Instead we could put a lot of wealth at stak... (read more)

Research into reducing general info-hazards

Biorisk

Researching and disseminating knowledge on how to reduce info-hazards in general could potentially be very impactful. An ambitious goal would be to have an info-hazard section in the training of journal editors, department chairs, and biotech CEOs in relevant scientific fields (although perhaps such a training would itself be an info-hazard!)

5
Tessa A 🔸
yeah, to expand upon this:

Best practices for assessment and management of dual-use infohazards

Biorisk and Recovery from Catastrophe, Values and Reflective Processes

Lots of important and well-intended research, including research into AI alignment and pandemic prevention, generates information which may be hazardous if misused. We would like to better understand how to assess and manage these hazards, and would be interested in funding expert elicitation studies and other empirical work on estimating information risks. We would also be interested in funding work to make organizations, including research labs, publishers and grantmakers, better equipped to handle dual-use research through offering training and incentives to follow certain best practices.

Reducing vaccine hesitancy

Biorisk

Even if we can develop vaccines for pandemic pathogens extremely quickly, vaccine hesitancy can limit their impact. Research and efforts to reduce vaccine hesitancy in general could potentially be high-impact.

Re: Expert polling for everything (already listed on ftxfuturefund.org/projects)

Some questions that I think it would be very valuable to get the answers for:

1. Year with 10% chance of AGI?
2. P(doom|AGI in that year)?
3. What would it take for you to work on AGI Alignment ($ amount, other)?

1 & 2 because I think that, for AGI x-risk timelines, 10% chance (by year X) estimates should be the headline, not 50%.

And 3 should be asked specifically to the topmost intelligent/qualified/capable people in the world, as an initial investigation into this project ide... (read more)

Research to solve global coordination problems

Epistemic Institutions, Values and Reflective Processes

In his essay Meditations on Moloch, Scott Alexander argues that a number of humanity's major problems (corruption, environmental extraction, arms races, existential risks from emerging technologies, etc.) occur because agents are unable to coordinate for a positive global outcome. Our current major coordination mechanisms of free markets, international institutions and democracy are inadequate to solve this problem. Research needs to be done to design better c... (read more)

New academic publishing system

Research that will help us improve, Epistemic Institutions, Empowering Exceptional People

It is well-known that the incentive structure for academic publishing is messed up. Changing publish-or-perish incentives is hard. However, one particular broken thing is that some journals operate on a model where they rent out their prestige to both authors (who pay to have their works accepted) and readers (who pay to read), extracting money from both while providing little value except their brand. This seems like a situation that coul... (read more)

Research on solving wicked problems

Economic growth, Values and Reflective Processes

It seems that many (almost all?) of the outstanding problems we effective altruists wish to solve are wicked problems. A better general understanding of how wicked problems can be solved could be very impactful. This could be advanced by establishing relevant fellowships, grants, and collaboration opportunities to facilitate research on this topic.

EA follower bounties

EA community building

Offer a fixed rate per subscriber to EA accounts on different platforms. Ask forum users to list all the accounts above a certain size they can think of that post quality EA content, and remunerate them all according to the same per-platform standard. Alternatively, pay only midsized accounts, accounts that aren't already being paid, or accounts on platforms where we would like more coverage.

Regulatory markets for AI safety

Artificial Intelligence

A political think tank to refine and push for regulatory markets for AI safety in as many countries as possible. Jack Clark, Gillian K. Hadfield: “We propose a new model for regulation to achieve AI safety: global regulatory markets. We first sketch the model in general terms and provide an overview of the costs and benefits of this approach. We then demonstrate how the model might work in practice: responding to the risk of adversarial attacks on AI models employed in commercial drones.”

It is probably h... (read more)

5
RyanCarey
It would also be good to offer whistleblower bounties for AI safety and biosafety!

(Per Nick's post, reposting)


Large-scale randomized controlled trials

Values and Reflective Processes; Epistemic institutions; Economic Growth

RCTs are the gold standard in social science research but are frequently too expensive for most researchers to run, particularly in the United States. We are interested in providing large-scale funding for RCTs that currently don't happen due to lack of funding.

2
Jackson Wagner
Higher-leverage way to do this might be to lobby for reforms making it easier to gather "Phase 4" data on therapies already in use?  Or reform the FDA in one of various other ways, for instance so they give provisional approval to therapies which have merely been shown to be safe but not necessarily effective?  Or code up some kind of platform that makes it easier for small organizations to run large trials by doing stuff like mailing people supplements and fitbit-like devices without having to jump through a bunch of formidable bureaucratic hoops.
2
Zac Townsend
I think this is all correct! By the way, I was mostly thinking of RCTs in the social sciences -- like randomized school vouchers or the Perry Preschool Experiment -- but it's equally true in the FDA/medical context. 

(Per Nick's note, reposting)

Development of cross-disciplinary talent

Economic Growth, Values and Reflective Processes, Empowering Exceptional People

The NIH successfully funded the creation of interdisciplinary graduate programs in, for example, computational biology, as well as MD/PhD programs. Increasingly, expertise confined to a single, artificially constructed discipline cannot solve our most pressing problems. We are interested in funding the development of individuals fluent in two or more fields — particularly people with expertise in technology and soci... (read more)

Situational Analysis Agency

Epistemics

When events of great global importance occur, they often have a bearing on EA projects, and sometimes EAs will want to respond. Take, for example, the invasion of Ukraine, the coronavirus pandemic, and supply chain disruptions. At the moment, most investigation of these issues is conducted on the side by EAs who are busy with other projects. It would be great to have some researchers available to investigate such issues at short notice so that we are better able to navigate these situations.

Research into Goodhart’s Law

Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism, Research That Can Help Us Improve

Goodhart’s Law states: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”, or more simply, "When a measure becomes a target, it ceases to be a good measure.”

The problem of ‘Goodharting’ seems to crop up in many relevant places, including alignment of artificial intelligence and the social and economic... (read more)
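A hedged illustration of the 'Goodharting' dynamic: if you select on a noisy proxy for quality, the winner's measured score systematically overstates its true quality. All the numbers below are made up for the sketch.

```python
import random

def selection_gap(trials=2000, candidates=50, noise=1.0, seed=0):
    """Average measurement error of the proxy-maximising candidate.

    Each candidate has true quality ~ N(0, 1), observed with noise ~ N(0, noise).
    Picking the highest *observed* score also selects for lucky noise, so the
    winner's observed score systematically overstates its true quality.
    """
    rng = random.Random(seed)
    gap = 0.0
    for _ in range(trials):
        pool = [(rng.gauss(0, 1), rng.gauss(0, noise)) for _ in range(candidates)]
        true_q, err = max(pool, key=lambda t: t[0] + t[1])  # proxy = true + noise
        gap += err  # the winner's measurement error
    return gap / trials

# selection_gap() comes out well above zero: once the measure is targeted,
# it stops tracking the underlying quality.
```

The harder you optimise the proxy (more candidates, more noise), the larger this gap becomes, which is one quantitative face of Goodhart's Law.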

Publish an EA-inspired magazine like Time Magazine's "Time for Kids" (TFK)

Empowering Exceptional People, Values and Reflective Processes,  Effective Altruism

Time for Kids has almost 2 million subscribers and has been used by educators for over 25 years to introduce elementary students to issues in science, history and civic engagement, while empowering students to take action and have a positive impact on the world. An EA-oriented magazine could do something similar by introducing students to topics like current pressing issues, relevant career pathwa... (read more)

[anonymous]6
0
0

Accident reporting in biology research labs

Biorisk and Recovery from Catastrophe

Currently, accident reporting is framed as an unpleasant and largely unimportant chore, even though there's evidence of lab leaks causing massive harm. Encouraging research groups to report their accidents quickly and thoroughly could therefore be very impactful. A reporting system could be built in a variety of ways; researching which would work best is itself worth funding. A potential system could implement insurance policies that require efficient and honest documenta... (read more)

2
Alex D 🔸
To some degree these already exist (e.g. here's a description of Canada's system), but I'm certain they could be drastically expanded, standardized, synthesized, and otherwise improved.

Just looked at the website and the following probably fits under talent-search / innovative educational experiments. Apologies for the formatting (this is from a private doc of ideas some time ago, and I currently don't have the time to reformat it / I'm also travelling with spotty internet).

Project 1

Title:

Longtermist movement building via "cash transfers" (i.e. grants/fellowships) to talented (high-school) students (from developing countries) to support them to work on the world's most pressing problems.

Idea:

Identify talented (e.g. top 0.01%) high ... (read more)

2
PeterSlattery
I like this! I had a similar idea about curating exceptional people in Third World countries and connecting them to training, resources and networks so that they could create marketplaces that would help enrich their home countries by creating employment and reducing poverty/inequality.

Yes this sounds plausible. I'm generally excited to think about ways humanity can survive and/or flourish after civilizational collapse and other large-scale disasters.

AI Safety Academic Conference

Technical AI Safety

The idea is to fund and provide logistical/admin support for a reasonably large AI safety conference along the lines of NeurIPS etc. Academic conferences provide several benefits: 1) potentially increasing the prestige of an area and boosting the career capital of people who get accepted papers; 2) networking and sharing ideas; 3) providing feedback on submitted papers and highlighting important/useful papers. This conference would be unusual in that the work submitted shares approximately t... (read more)

9
MaxRa
As Gavin mentioned somewhere here, one significant downside would be to silo AI Safety work from the broader AI community.

Promote ways that suppress status seeking

Great Power Relations, Economic Growth

Status seeking is associated with massive economic inefficiencies (waste production, economic inequality, ...). The zero-sum nature of status seeking also takes a toll on individual well-being and, consequently, on how well societies function.

In the political domain, status seeking can lead to wars (as recent developments illustrate).

The EA community should invest in institutions, research, and solutions that steer society away from status seeking.

Research on raising the sanity waterline

It seems that teaching the general public rationality tools may cause more polarisation. That is because many of the ideas end up being used primarily for argument-winning instead of truth-seeking.

There is a risk that some ideas will make people less rational when rationality is taught. For example, Eliezer Yudkowsky wrote an article, Knowing About Biases Can Hurt People.

Scott Alexander uses the term Symmetric and Asymmetric Weapons for a similar idea: Some thinking-tools are more useful for winning arguments tha... (read more)

Assessment companies

Epistemic Institutions, Empowering Exceptional People

Most certification processes (e.g. schools & universities) require going through their teaching process in order to be certified. Due to either bad incentives or central planning, their tests are also often very bad at accurately assessing the skills of the assessed. This creates a situation where most certifications (e.g. high school diplomas & degrees) aren't as credible and reliable as they should be, and yet people have to go through years of studying to get them because the

... (read more)

Much better narratives of the future and understanding of “Utopia”

Many efforts to discuss “utopia” are unproductive, and the word itself is often disliked. This is despite most people caring deeply about the future and how it is shaped. Improving communication about the future is important for practical reasons, like improving public understanding of longtermist projects. Also, our limited understanding of preferences over even the medium-term future could unduly influence work and limit progress toward better outcomes more broadly. This project includes research and de... (read more)

2
Charles He
Credit to this existing content and also this existing content for the idea.

Training Course for Professional AI Ethicists on Longterm Impacts

Artificial Intelligence

Most AI ethicists focus on the short-term impacts of AI rather than its longer-term impacts. Many might be interested in a free professional development course covering the latter. Such a course should cover a variety of perspectives, including those of prominent AI safety skeptics.

4
MaxRa
Cool idea. I have some worry that a majority of AI ethicists have sufficiently bad epistemics (sth. like fairly strong views, relatively weak understanding of the world outside their discipline, and little skill at honestly/curiously/patiently exploring disagreements) that this would end up being regrettable. Would be interested in updates here. Maybe it's similar to the [bioethicists issue](https://forum.effectivealtruism.org/posts/JwDfKNnmrAcmxtAfJ/the-bioethicists-are-mostly-alright) and I got my impression only from public discussions, & maybe a selection of the weakest ideas?

Bountied Rationality Website

Effective Altruism

Oftentimes great ideas fail to find funding through a grant because those who come up with a great proposal are not the right people to complete the proposal. An inducement prize platform separates who comes up with ideas (proposers) and those who complete the ideas (bounty hunters), thereby allowing the best ideas to be elevated based on the quality of the idea itself. It also makes it easier to find others working on the same project because there can be a "competitors and collaborators" tab that shows who el... (read more)

Better understanding social movements
Movement building & Conceptual dissemination 

People involved with social movements are important collaborators for the EA movement. However, there is relatively little high quality survey work to understand how these groups differ and overlap. We would therefore like to fund research to regularly survey members of social movements to better understand them. For instance, this could involve understanding i) aggregations of behaviours and attitudes (e.g., what different identities, demographics/geographies/groups... (read more)

Cotton Bot 

Economic growth

Problem: In 2021, a mere 30% of the world’s cotton harvest was gathered by machinery. This means that roughly 70% of the 2021 worldwide supply of cotton was harvested using the same methods as American slaves in the 1850s. A significant amount of the hand harvesting involves forced labor.

Solution: The integration of existing technologies can provide a modular, robust, swarming team of small-scale, low-cost harvesters. Thoughtful system design will ensure the harvesters are simple to operate and maintain while still containing leadi... (read more)

7
Dawn Drescher
I think this is key. If most of the harvest is not forced labor, then the cotton bot may just steal the least terrible employment opportunity from these people and they have to fall back on something more terrible. Then again it can maybe be marketed specifically to the places that use forced labor.
1
marswalker
I also filled out the form, so apologies if this is a double entry! 

Increase the number of STEM-trained people, in EA and in general

Economic growth, Research that can help us improve

Research and efforts to increase the number of quantitatively skilled people in general, together with EA movement-building efforts targeted at them (e.g., for AI alignment research, biorisk research, or scientific research in general), could potentially be very impactful. Incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations are all promising avenues.

New non-academic intellectual communities

Empowering exceptional people, Values and reflective Processes

The pathologies of academia are well known, and many people who would like to engage with and contribute to research lack the structures to do so once they are outside academia. Recently some new projects have sprung up to fill this gap, such as:

... (read more)

Self-Improving Healthcare

Biorisk and Recovery from Catastrophe, Epistemic Institutions, Economic Growth

Our healthcare systems aren't perfect. One underdiscussed aspect of this is that we learn almost nothing from the vast majority of treatment that happens. I'd love to see systems that learn from the day-to-day process of treating patients: systems that use automatic feedback loops and crowd wisdom to detect and correct mistakes, and that identify, test and incorporate new treatments. It should be possible to do this. Below is my suggestion.

I suggest we allo... (read more)

Responsible AI Incubator

AI Safety

Creating an incubator to encourage new startups to invest in the responsible use of AI (including longer-term safety issues) by making this a requirement of investment. In addition to influencing companies, this could enhance the credibility of the field and help more AI safety researchers to become established.

Downsides: This could accelerate AI timelines, but the fund would only have to offer slightly better terms in order to entice startups to join.

Create a suite of online and in-person EA qualifications to help attract new people into the movement and upskill existing members.

The suite of online qualifications could follow a model similar to Khan Academy: short, interactive courses led by gifted teachers and delivered online. These courses would cover foundational EA materials.

EA could also partner with universities to deliver formal courses in areas such as existential risk or AI safety.

2
PeterSlattery
I had a similar idea here.

Research into increasing the “surface area” of important problems

Artificial Intelligence, Biorisk and Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Great Power Relations, Space Governance, Effective Altruism

The idea here is that 80,000 Hours seems to follow an approach along the lines of (1) What are the biggest problems? (2) What are the obvious ways to make progress on these problems? (3) How can we get people to implement these obvious ways?

If we hold the first question constant, we can instead ask:... (read more)

A project to investigate and prioritize project proposals such as all of these

Research That Can Help Us Improve, Effective Altruism, Empowering Exceptional People

Even long lists of project proposals like this one can miss important projects. The proposals (including my own) are also rarely concrete enough to gauge their importance or tractability.

Charity entrepreneurs are currently mostly on their own when it comes to prioritizing between project proposals and making them more concrete. There may be great benefits to specialization and economies of scale h... (read more)

A fast and widely used global database of pandemic prevention data

Biorisk

Speed is of the essence for pandemic prevention when emergence occurs. A fast and widely used global database could potentially be very impactful. It would be great if events like the early discovery of potential pandemic pathogens and doctors' diagnoses of potential pandemic symptoms were regularly and automatically uploaded to the database, so that high-frequency algorithms could use it to predict potential pandemic outbreaks faster than people can.
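The prediction step could be as simple as anomaly detection on incoming case counts. Here is a minimal, purely illustrative sketch (the function name, window size, threshold, and data are all hypothetical, not part of the proposal):

```python
# Hypothetical sketch: flag anomalous growth in daily symptom-report counts
# pulled from a shared database. All names and thresholds are illustrative.

def flag_outbreak(daily_counts, baseline_window=14, threshold=3.0):
    """Return True if the latest day's count is anomalously high.

    Compares the most recent count against the mean and standard
    deviation of a trailing baseline window (a simple z-score rule).
    """
    if len(daily_counts) <= baseline_window:
        return False  # not enough history to form a baseline
    baseline = daily_counts[-baseline_window - 1:-1]
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    std = var ** 0.5 or 1.0  # guard against a perfectly flat baseline
    z = (daily_counts[-1] - mean) / std
    return z > threshold

# A flat history with a sudden spike should trigger the flag.
history = [10, 12, 11, 9, 10, 13, 11, 10, 12, 11, 10, 9, 12, 11, 10, 48]
print(flag_outbreak(history))  # → True
```

A real system would of course use richer signals (metagenomic hits, geographic clustering), but the value of the database is precisely that it makes even simple rules like this runnable in near real time.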

One Device Per Human

Similar to: https://en.wikipedia.org/wiki/One_Laptop_per_Child

Allowing people from all over the world to vote on global issues.

(this is assuming we have global governance)

Create an independent organization working along the lines of Implementation Support Unit of Biological Weapons Convention

Biorisk and Recovery from Catastrophe

The Biological Weapons Convention, which forbids the development of biological weapons, was signed in 1972 by most countries. But compliance is supported only by the Implementation Support Unit (BWC ISU), with a budget in the range of $1-2m and roughly four employees. At the same time, it seems there is a fair probability that Russia has an active biological weapons development program.

Cr... (read more)

Reducing risks from laboratory accidents
Biorisk and Recovery from Catastrophe

Some life sciences research, such as gain-of-function work with potential pandemic pathogens, poses serious risks even in the absence of bad actors. What if we could eliminate biological risks from laboratory accidents? We'd like to see work to reduce the likelihood of accidents, such as empirical biosafety research and human factors analysis on laboratory equipment. We'd also like to see work that reduces the severity of accidents, such as warning systems to inform scientists if a pathogen has not been successfully deactivated and user-friendly lab strains that incorporate modern biocontainment methods.

Replacing Institutional Review Boards with Strict Liability

Biorisk, Epistemic Institutions, Values and Reflective Processes

Institutional Review Boards (IRBs) regulate biomedical and social science research. As a result of their risk-averse nature, important biomedical research is slowed or deterred entirely; e.g., the UK human challenge trial was delayed by several months because of a protracted ethics review process, and an enrollment delay in a thrombolytics trial cost thousands of lives. In the US, a plausible challenge to IRB legality can be mounted on Firs... (read more)

Calculating the cost-effectiveness of research into foundational moral questions

Research That Can Help Us Improve

All actions aiming at improving the world are either implicitly or explicitly founded on a moral theory. However, there are many conflicting moral theories and little consensus regarding which theory, if any, can be considered the correct one (this issue is also known as Moral Uncertainty). Further adding to the confusion are issues such as whom to include as moral agents (animals? AIs?) and Moral Cluelessness. These issues make it extremely dif... (read more)

Reducing amount of time productive people spend doing paperwork

Economic Growth, Research That Can Help Us Improve

One example is productive researchers working in high-impact fields who are forced to write copious paperwork for grants. Another is filing taxes. Funding various approaches to reduce this problem, such as research on optimal streamlining of grant decision processes, nonprofits/volunteers/crowdsourced advice for helping fill out paperwork like taxes, and improving pipelines into lab-manager and personal-assistant roles for high-productivity researchers, could potentially be impactful.

Develop organizations like the Institute for Advanced Study, but for longtermism

Effective altruism

The Global Priorities Institute in the UK is one example. It could be very impactful to develop similar research organizations in other locations, such as the US and the EU. (Perhaps they exist already and I just don't know about them!)

Addendum: Even the GPI could be more interdisciplinary, like the IAS; e.g., it could branch out beyond economics and philosophy.

A public longtermism pledge/petition

Effective altruism

One way to increase the solidarity of EAs and longtermists, and to increase the gravitas associated with longtermism, is a public pledge or petition that people can sign. Public intellectuals, academic faculty, and prestigious individuals could be recruited to sign and publicly highlighted if they agree. This would help longtermism become a social norm. The potential impact is suggested, for example, by how little many public intellectuals' current commentary considers the risks of nuclear war arising from the Russia-Ukraine war.

Targeting movement-building efforts at top universities' administration and admissions

Effective altruism

Currently, the admissions officers of top (say, US) universities select and recruit high-potential students (modulo things like Harvard's Z-list), and EA then uses targeted efforts to persuade and help these high-potential students go into high-impact careers. Yet most graduates of top universities still do not do so, and a significant proportion go into zero-sum or negative-sum careers due to sticky social norms.

One solution m... (read more)

Prosocial social platforms

Epistemic institutions, movement-building, economic growth

The existing set of social media platforms is not particularly diverse, and existing platforms also often create negative externalities: reducing productive work hours, plausibly lowering epistemic standards, and increasing signalling/credentialism (by making easily legible credentials more important, and in some cases reducing the dimensionality of competition, e.g. LinkedIn reducing people to their most recent jobs and place of study, again making the competition for cred... (read more)

Extinction-level events outside of biorisk and nuclear catastrophes 

Biorisk and Recovery from Catastrophe

In order to prepare for worst-case catastrophes, we need to anticipate them. Biological weapons and nuclear catastrophes are two well-identified threats to humanity's long-term survival, as is climate change. However, there may be emerging risks that are yet to be addressed by policymakers or the EA community.

We'd be interested in convincing work highlighting credible, large-scale risks that are overlooked by most forecasters and the EA community, as well as any applicable recovery strategies.

6
Greg_Colbourn
You've neglected to mention AI! Arguably this is considered the biggest x-risk by the EA community (see The Precipice). Summary table from the book [highlighted by me]: Table 6.1 from The Precipice by Toby Ord. I'll also note that in a couple of decades of serious research, no new x-risks have been identified. But of course it is still worth remaining vigilant to newly identified threats.

Tools for improved transmission of tacit knowledge

Biorisk and recovery from catastrophe

Many scientific and technological skills require learning through apprenticeship under a more experienced practitioner, and can't easily be described in writing. If a global catastrophe breaks the transmission of skills from masters to apprentices, it may be difficult to recover those skills. This would make recovery from catastrophe difficult. But there may be ways of improving the recording of these skills, such as through video or methods of observing expert performan... (read more)

Facilitate U.S. voters' relocation to swing states

Values and Reflective Processes

A key difficulty of implementing alternative voting systems which can more effectively aggregate voters' preferences/information (and of implementing beneficial policies or constitutional amendments in general) is political gridlock. The political party that stands to lose power if a voting-system reform passes will vigorously attempt to obstruct it.  The resolution of political gridlock could not only enable large-scale policy solutions to previously intractable societal... (read more)

1
samuel
Peter - great idea, I've been doing some thinking on this as well, will probably send you an email!

An EA Vegan Restaurant Chain:

Effective Altruism

Setting up a vegan restaurant chain associated with Effective Altruism could provide a cost-neutral or even profitable way of providing home bases for EA Societies in major cities. It would also provide opportunities to grow the community by prominently advertising any EA events running at the venue.

Downside: This might be seen as cultish. It wouldn't surprise me if there was no-one who was value-aligned who had the relevant skills. That said, we might be able to sign a franchise agreement with an existing restaurant.

(Probably not a good idea, but when brainstorming it is better to share more rather than less)

3
Max Ghenis
Framing it as EA hubs that also happen to serve vegan food could come off as less cultish. The restaurant could also donate 10% of revenue to GiveWell. Edit: Or let the customer select a GiveWell charity to receive 10% of their bill.
2
Yonatan Cale
* Vegans want to live where vegan restaurants exist
* Vegan restaurants want to exist where vegans live

Perhaps a place we could add value is in coordination. The rest should happen by itself, theoretically.

Leadership development:

Effective Altruism

People who are ambitious are often keen on developing their leadership skills. A program that supported ambitious and altruistic people could both increase people's individual impact and provide a form of EA outreach through sharing EA frames and perspectives. This program would also be useful for developing the leadership skills of people within EA.

A search engine for micro-level data

Macro-level data is easy to find these days. If you want to know the historical GDP of China or carbon emissions of the U.S., you can find the information on many non-profit and for-profit sites via Google.

But suppose you want to quickly look up "people's satisfaction with their daily lives" and "the amount they spend on food," you'd have to read dozens of papers, locate the names of the datasets used, find the places where such survey data is hosted (if it's available at all), create an account on the hosting site, down... (read more)

EA from the ground up

Effective Altruism

Intellectual movements tend to develop by building upon the work of the previous generation and rejecting some of its foundational assumptions. We'd be keen to see an experiment to accelerate this. We'd suggest that the first step would be to identify the assumptions that are underlying EA or specific EA cause areas or specific EA strategies and try to figure out when these break. The project would then focus on those that are most likely to be false, particularly those which would be high impact if false. Efforts wou... (read more)

An EA Space Agency

Effective Altruism, Space Governance

Let’s build an organization which formulates and implements space programs, missions, and systems, which are aimed at the highest-priority things that humanity can be doing in space. There is currently no space organization, public or private, which formulates and implements programs and missions aligned solely with doing the most good, in an impartial and longtermist sense. There are many organizations which do some or much good, such as NASA, ESA, SpaceX, and others, but there is no example today whic... (read more)

Improving Critical Infrastructure

Effective Altruism

Some dams are at risk of collapse, potentially killing hundreds of thousands. The grid system is very vulnerable to electromagnetic pulse attack. Infrastructural upgrades could prevent sudden catastrophes from failure of critical systems our civilization runs on.

Build an Infrastructure Organization for The EA Movement (TEAM) 

Effective Altruism, Empowering Exceptional People

Many high impact organizations in effective altruism have expressed issues with sourcing operations talent which takes time away from the key programs these charities provide, reducing overall impact. An infrastructure organization could provide operational support and build valuable tools that would alleviate the burden from these meta charities and streamline processes across organizations to improve movement coordination. This organizati... (read more)

An EA insurance and finance fund to make it easier for people to fund and take significant personal risk for important social benefits, e.g., due to early career change, founding a startup etc.
Movement building & Helping exceptional people

Risk avoidance is a major reason why people don't change careers or take risks in pursuit of greater impact. We'd therefore like to see more attempts to establish financial services that reduce risk and promote more rapid and greater impact amongst exceptional individuals. We note that there may be advantages in combining long-term investing initiatives for patient philanthropy with insurance offerings.

Decentralized incentives for resilient public goods after global catastrophic risk

Recovery from Catastrophe

Using cryptoeconomics to bootstrap the incentivization of a resilient grid, via which further cryptoeconomic incentives induce the bottom-up production of survival bunkers and other post-catastrophe public goods that could survive GCRs such as nuclear war. Figuring out how to reward people for preparing themselves, more so if they help others or build critical infrastructure that lasts.

A Facebook comment I wrote that I am copy-pasting, I will like... (read more)

Find promising candidates for “Cause X” with an iterative forecast-guided thinktank

Epistemic institutions

How likely is it that the EA community is neglecting a cause area that is more pressing than current candidates? We are fairly confident in the importance of the community's current cause areas, but we think it's still important to keep searching for more candidates.

We’d be excited to fund organisations attacking this problem in a structured, rigorous way, to reduce the chance that the EA community is missing huge opportunities. 

We propos... (read more)

Help high impact academics spend more time doing research

Empowering exceptional people

Top academic researchers are key drivers of progress in priority areas like biorisk, global priorities research and AI research. Yet even top academics are often unable to spend as much time as they want to on their research. 

We’d be excited to fund an organisation providing centralised services to maximise research time for top academics, while minimising the overheads of setting up these systems for academics. It might focus on:

(1) Funding and negotiating teaching ... (read more)

Modern Public Forums

Values and Reflective Processes, Epistemic Institutions, Effective Altruism

Violence begins when conversations stop. We'd love to see a renaissance of ancient Greek agoras or Roman fora which offered their citizens a public space where they could gather, study and discuss current events as well as everything else that is timelessly important for the future of humanity. In modern times, such places have become increasingly scarce and social media do not constitute a suitable replacement since many critical layers of human communicati... (read more)

Annual reports on cryptocurrency activity and philanthropy
Public influence & attention economy

Lots of money has been invested in cryptocurrency, and it seems likely that this will continue to be the case. The growth in the market has created many new millionaires, some of whom are quite atypical and young relative to high-wealth individuals in other areas. Cryptocurrency philanthropic norms appear to differ from those of the main population of donors and are not as well established. Thus, identifying and publicising key trends and opportunities in this area ... (read more)

Better understanding the role of behaviour science and systems thinking in producing key EA outcomes
Social change and movement building

Behaviour and systems change are core to all EA outcomes. We would therefore like to support research to provide a better understanding of the causes of EA relevant behaviour (e.g., career change, donation, involvement in EA or social movement), at both psychological and structural levels. 

---

See this for some examples of ideas potentially relevant to AI governance or safety

Create EA focused communication initiatives
Movement building & Conceptual dissemination 

Optimising EA movement building and coordination requires confident and effective communicators for compelling and high-fidelity conceptual dissemination. To help improve communication across members of the EA community, we would welcome applications for courses focused on helping EAs communicate better, for instance modelled on the Toastmasters program or the Dale Carnegie course.

Supporting Longitudinal studies of Effective Altruists
Movement building

One significant part of the EA movement is helping individuals to have maximal impact across their lifecycle. However, EA lacks evidence for how different choices, circumstances, and lifestyles affect individual impact. To address this, we would like to support longitudinal studies to understand, for instance, how important factors such as age, career, happiness, mental health, and actions (e.g., taking pledges, attending events, undergoing career changes) interact and shape perceived impact and EA involvement over lifespans, and how these differ between current and former EAs.

Research Coordination Projects

Research that can help us improve

At the root of many problems that are being discussed are coordination problems. People are in prisoners' dilemmas, and keep defecting. This is the case in the suggestion to buy a scientific journal: if the universities coordinated they could buy the journal, remove fees, improve editorial policies, and they would be in a far better situation. Since they don't coordinate, they have to pay to access their own research.

Research into this type of coordination problem has revealed two general strat... (read more)

Combined conferences

Effective altruism, Epistemic institutions, Values and reflective processes

Fund teams that have roots in both EA and in other relevant fields and communities to put on conferences that bring together those communities. For example, it could be valuable to put on a conference for EA and RadicalxChange, given that there is a lot of overlap in interests but significant differences in approach. This could help bring in new ideas into EA, especially as a conference is a good way to build relationships and have lengthy, careful discussions. O... (read more)

Audio/video databases of people's experiences of problems

Values and reflective processes, Effective altruism, Research that can help us improve, epistemic institutions

Grantmakers and policymakers are usually far removed from the problems that people face in their daily lives, especially from the problems of people who are more marginalised. Part of the solution to this should be that grantmakers and policymakers make sure to talk to a variety of people to involve them in decisionmaking. However, databases of audio and video interviews with people could als... (read more)

Multilingual web searching and browsing

Effective altruism, epistemic institutions

Despite the capability of automated translation, there is no smooth way to browse the web in multiple languages. It would be useful to have search engines return results from any language, with the results automatically translated into English. When you click on them, you then go to a web page automatically translated into English and can continue browsing in English. This seems important for EA because EA research currently relies primarily on English resources, and this coul... (read more)

A start-up accelerator for pledge-signing EAs and longtermists.

Economic Growth, Effective Altruism, Empowering Exceptional People

Y-combinator/Entrepreneur First meets Founders Pledge. A top-tier start-up accelerator where applicants sign a pledge to donate a significant amount of exit proceeds/profits to doing the most good they can from an effective altruist/longtermist perspective. Build start-ups and network with your value aligned peers!

4
Dawn Drescher
Maybe Founders Pledge itself can be turned into a top-tier startup accelerator? If they’re up for it?

Find a niche to create a subsidized prediction market

Epistemic Institutions

One of the problems of current forecasting is that it isn’t getting attention from decision-makers. One way to jumpstart this is to create a subsidized market in some well-chosen area that will work well and thus publicly prove and legitimize the use of prediction markets.

One suitable idea is Robin Hanson's fire-the-CEO market:

Make two subsidized real-money markets on the stock price of each Fortune 500 firm, one market conditional on its CEO stepping down by quarter’s end, an

... (read more)
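The mechanic behind such conditional markets can be illustrated with a toy decision rule. This is a sketch only; the function name, prices, and margin are hypothetical, and real conditional markets involve subtleties (e.g., selection effects) this ignores:

```python
# Hypothetical sketch of the decision signal in a fire-the-CEO
# conditional market. Prices are illustrative, not from any real market.

def board_signal(price_if_ceo_stays, price_if_ceo_leaves, margin=0.02):
    """Compare the two conditional stock-price forecasts.

    Each input is the market's expected stock price conditional on the
    CEO staying or stepping down by quarter's end. A 'fire' signal means
    the market expects the firm to be worth more without the CEO.
    """
    if price_if_ceo_leaves > price_if_ceo_stays * (1 + margin):
        return "fire"
    if price_if_ceo_stays > price_if_ceo_leaves * (1 + margin):
        return "keep"
    return "no clear signal"

print(board_signal(100.0, 112.0))  # → fire
```

The subsidy matters because it pays traders to keep both conditional prices informative even when trading volume would otherwise be thin.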
3
Jakob
One potential niche could be betting markets around outcomes of political events (e.g., betting on outcome metrics such as GDP growth, expected lifespan, GINI coefficient, or carbon emissions; linked to events such as a national election, new regulatory proposals, or the passing of government budgets). Depending on legal restrictions, this market could even ask policy makers or political parties to place bets in these markets, to help the public assess which policy makers have the best epistemics, to hold policy makers accountable, and to incentivize policy makers to invest in better epistemics. (note: this also links to an idea presented in a different comment here -https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=zjvCCNuLEToCQyHdn)

Formulate AI-super-projects that would be both prestigious and socially beneficial

Artificial Intelligence, Great Power Relations

There are already some signs of race dynamics between the US and China in developing TAI. Arguably, they are at least partly motivated by concerns of national prestige. If race dynamics speed up, it might be beneficial to present a set of prestigious AI-projects that the US and other countries can adopt. These projects should have the following features:

  • Be highly visible and impressive for a wide audience
  • Contribute to safer
... (read more)
3
constructive
Possible downside: Contribute to further speed-up of AI development, possibly leaving less time for alignment research (However, if done correctly, this project only harvests pre-existing dynamics and leads funds to beneficial projects.)

The Petrov Prize for wise decision-making under pressure

Epistemic Institutions, Values and Reflective Processes

On September 26, 1983, Stanislav Petrov singlehandedly averted a nuclear war when he decided to wait for more evidence before reporting an apparent launch of ICBMs aimed at the Soviet Union. The incident was later determined to be a false alarm caused by an equipment malfunction. While Petrov's story is one of the most dramatic examples ever of impactful decision-making under pressure, there are plenty of other people and organizations throughout ... (read more)

4
HaydnBelfield
Note the Future of Life Award, which has been going for the last 5 years - https://futureoflife.org/future-of-life-award/ Given to Arkhipov, Petrov, Meselson, Foege & Zhdanov, Farman & Solomon & Andersen
2
RyanCarey
Here is a variation on the suggestion - boosting the FLI Award to be more like a Nobel!

Simultaneously reliable and widely trusted media

Epistemic institutions

Reliable (in the truthseeking sense) media seems to not be widely trusted, and widely trusted media seems to not be reliable. Research and efforts to achieve both simultaneously could potentially be very impactful for the political resolution of a broad range of issues. (Ambitious idea: Can EAs/longtermists establish a media competitor?)

Global Mini-public on AI Policy and Cooperation

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

We'd like to fund an organization to institutionalize regular (e.g., yearly) global mini-publics to create recommendations on AI policy and cooperation, ideally in partnership with key academic journals (and potentially the UN, major corporations, research institutions, etc.). Somewhat analogous to globalca.org, which focuses on gene editing (https://www.science.org/doi/10.1126/science.ab... (read more)

Influencing culture to align with longtermism/EA

Effective altruism

"Everything is downstream of culture." So, basic research and practical efforts to make culture more aligned with longtermism/EA could potentially be very impactful.

Global cooperation/coordination on existential risks

AI, Biorisk

Negative relationships between, for example, the US and China hamper pandemic prevention efforts, to the detriment of all people. Research on, and efforts to facilitate, fast, effective, and transparent global cooperation/coordination on pandemic prevention could be very impactful. Movement building around the sheer importance of this (especially among the relevant scientists and governmental decision-makers) would be especially impactful. Perhaps pandemic prevention can be "carved out" in US-China relations? This also applies to other existential risks.

Reducing antibiotic resistance

Biorisk

If, say, a plague bacterium (maybe there are better examples) became resistant to all available antibiotics and started spreading, it could cause a pandemic like the Black Death. Research on how to behaviorally reduce antibiotic use (e.g., reduce meat consumption, convince meat companies not to use antibiotics, reduce overprescription) and how to develop new antibiotics (AI could help), as well as advocacy for reducing antibiotic use, could potentially be high impact.

EA influencers

Effective Altruism

More awareness of EA = more talent and money for EA

Pay A-list influencers, with followings independent of EA, to promote EA content and themes. Concentrate on influencers popular with GenZ.

Risks: lack of message fidelity

Research on predicting talent

Effective altruism, Economic growth, Research that can help us improve

Predicting which people (e.g., prospective students, prospective employees, people to whom movement-builders target their efforts) are likely to have high potential is extremely important. But it is plausible that the current ways in which these predictions are made are incomplete, cognitively biased, and substantially suboptimal. Research into identifying general or field-specific talent could be very impactful. This can be done by funding fellowships, grants, and collaboration opportunities on the topic.

Broadening statistical education

Economic Growth, Values and Reflective Processes

Human cognition is characterized by cognitive biases, which systematically lead to errors in judgment: errors that can potentially be catastrophic (e.g., overconfidence as a cause of war). For example, a strong case can be made that Russia's invasion of Ukraine was an irrational decision by Putin, a consequence of which is potential nuclear war. Overconfidence is a cause of wars and of underpreparation for catastrophes (e.g., pandemics, as illustrated by the COVID-19 pande... (read more)

2
jknowak
What I think I'd love to see is one of the below:
- statistics bootcamps
- statistics tutoring (or more like a lack of problems to work on with your tutor; my idea was to try and go through actuary exam questions)
- something like Cochrane Training (where you can learn interventions review) but more broad/general?
1
Peter S. Park
Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.

International mass movement lobbying against x-risks

Biorisk and Recovery from Catastrophe,  Great Power Relations, Values and Reflective Processes

In recent years, there has been a dramatic growth in grassroots movements concerned about climate change, such as Fridays for Future and Extinction Rebellion. Some evidence implies that these movements might be instrumental in shifting public opinion around a topic, changing dominant narratives, influencing voting behaviour and affecting policymaker beliefs. Yet, there are many more pressing existential risk... (read more)

Risk modelling and preparedness for climate-induced risks

Research That Can Help Us Improve

Climate change is a risk factor for several threats to the long-term future of humanity. It increases the likelihood of infectious diseases, including novel pathogens, and it is correlated with increased state fragility and a greater propensity for conflict. Therefore, an organisation that models the climate resilience of social, health, and political systems, and subsequently seeks to strengthen and improve their preparedness, may reduce the likelihood of significant threats to humanity's long-term future.

1. Longitudinal studies

Epistemic Institutions; Economic Growth

We are interested in funding long-term, large-scale data collection efforts. One of the most valuable research tools in social science is the collection of cross-sectional data over time, whether on educational outcomes, political attitudes and affiliations, or health access and outcomes. We are interested in funding research projects that intend to collect data over twenty years. Such projects require significant funding to ensure follow-up data collection.

2. Replication funding and publicat... (read more)

Thanks so much for all of these ideas! Would you be up for submitting these as separate comments so that people can upvote them separately? We're interested in knowing what the forum thinks of the ideas people present.

4
TW123
Some of this has been said in threads above, but I don't think that upvotes are a very good way of knowing what the forum thinks. People are definitely not reading this whole thread, and the first posts they see will likely get all of their attention. On top of that, I do not expect forum karma to be a good indicator of much even in the best case. People tend to upvote what they can understand and what is interesting and useful to them. I suspect what the average EA Forum user finds useful and interesting is probably only loosely related to what a large EA grantmaker should fund. For instance, good writing is in general a very good way to get upvotes, but that doesn't correlate much with the strength of the ideas presented.
1
Zac Townsend
Apologies. I tried. The forum definitely thinks I'm spamming it with fourteen comments, but we'll see how it goes. 
2
Nathan Young
You have to pause for about 30s between comments

Making Public Information More Public

Access to public information is hampered by arcane systems and government roadblocks that prevent people from getting direct access to data. Federal court records are behind a government paywall. Filing and keeping up with Freedom of Information Act requests requires herculean dedication. Government data releases are sometimes timed for when people are least likely to notice them. These are just a few examples of the barriers placed between what should be public information and the actual public. As a result, unad... (read more)

Research on Competitive Sovereignties

Governance, New Institutions, Economic Growth

The current world order is locked in stasis and status quo bias. Enabling the creation of new jurisdictions, whether via charter cities, special economic zones, or outright creation of new lands such as seasteading, could allow more competition between countries to attract subscriber-citizens, increasing welfare.

It would also behoove us to think about standards for international interoperability in a world where '1000 nations bloom'. Greater decentralization of power could in... (read more)

Materials Informatics

For centuries, we have identified some of the most important materials in modern society by chance, including steel, copper, and rubber.

Given the grand challenges of today's world, the discovery and scaling of new advanced materials are necessary to create impact. (After all, everything around us is made of materials.)

I'd like to see more funding for materials informatics, along with guidance and regulation in the field, so that we don't create advanced materials or nanomaterials that could cause a catastrophe.

A replication lab or project to replicate and expand key EA research
Movement building & conceptual dissemination

EA outreach and strategy is supported by a growing pool of social psychology research exploring EA-related topics (e.g., appeals to change dietary or donation choices, efforts to understand moral views, or related interventions). However, much social psychology research doesn't replicate, or varies depending on the audience, when retested. It's therefore possible that some key findings and theories that guide EA are more or less valid or robust than c... (read more)

Creating EA aligned research labs
Conceptual dissemination 

Academic publications are considered to be significantly more credible than other types of publications. Many academics with outsized impacts lead publication labs. These hire many junior researchers to help maximise the return on the knowledge and experience of a more senior researcher. We would like to support attempts to found and scale up academic research labs aligned with relevant cause areas. 

Synthesis book fund/prize

Senior academics or practitioners have the accumulated experience and knowledge to write grand syntheses of their subjects, or to put forward grand theories, without those being mere wild speculation. This fund would proactively support and/or retroactively reward work of this type. To make such work more likely, the fund could seek out academics who seem particularly well placed to create it and encourage them to do so. In addition, the fund could support the writing of both a... (read more)

2
Dawn Drescher
Great! Some of them may need (or benefit from) ghostwriters. I don’t know how easy it is to find good ghostwriters for a given subject, but that could be another problem that such an organization could solve for them.

Credence Weighted Citation Metrics

Epistemic Institutions

Citation metrics (total citations, h-index, g-index, etc.) are intended to estimate a researcher's contribution to a field. However, if false claims get cited more than true claims (Serra-Garcia and Gneezy 2021), these citation metrics are clearly not fit for purpose.

I suggest modifying these citation metrics by weighting each paper by the probability that it will replicate. If each paper has citations c and probability of replicating p, we can modify each formula as follows: instead of measuring t... (read more)
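As a minimal sketch of what this weighting could look like for the h-index specifically (the papers and replication probabilities below are entirely hypothetical, and the scheme of multiplying citations by replication probability is just one possible choice):

```python
# Credence-weighted h-index sketch: weight each paper's citation count
# by its estimated probability of replicating, then find the largest h
# such that at least h papers have weighted citations >= h.

def weighted_h_index(papers):
    # papers: list of (citations, replication_probability) pairs
    weighted = sorted((c * p for c, p in papers), reverse=True)
    h = 0
    for rank, w in enumerate(weighted, start=1):
        if w >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher: a highly-cited paper unlikely to replicate
# contributes little under this metric.
papers = [(100, 0.9), (50, 0.5), (30, 0.2), (10, 0.95), (4, 0.8)]
print(weighted_h_index(papers))  # -> 4
```

The same multiply-by-replication-probability idea extends straightforwardly to total citations or the g-index; the harder empirical problem is producing the replication probabilities themselves.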

Funding AI policy proposals to slow down high-risk AI capability research.

AI alignment, AI policy

We want AI alignment research to catch up with and surpass AI capability research. Among other things, AI capability research requires a friendly political environment. We would be interested in funding AI policy proposals that would increase the chance of obtaining effective regulations slowing down highly risky AI capability R&D. For example, some regulations could require large language models to pass a thorough safety audit before deployment or scaling in parameter... (read more)

4
Chris Leong
One worry is that red tape increases the chance that someone who doesn't care about regulation frontruns the first team to AGI.
1
Guillaume Corlouer
Yes. To reduce that risk we could aim for an international agreement banning high-risk AI capability research, though that might not be satisfying. I have the impression that very few people (if any) are working on that flavor of regulation, and it could be useful to explore it more. Ideally, if we could simply coordinate not to work directly on producing generally capable AI until we figure out safety, it would be an important win.

Global (baseline) Education Curriculum

Getting people aligned and avoiding division: humans on this planet are on the same team.

By creating a basic program and common understanding, it will be much easier to implement any of the global policies required to handle climate change.

Some of the proposed subjects:

  • Literacy, numeracy
  • English (Latin and Esperanto are not real competitors; Chinese is too difficult)
  • Health, human body, food, nutrition
  • Nature, earth sciences, environment
  • Making, engineering, tinkering 
  • Communication, relationships, culture, to
... (read more)

Research on solving the wicked problem of underinvestment into interdisciplinary research

Economic Growth, Research That Can Help Us Improve

"Interdisciplinary research is widely considered a hothouse for innovation, and the only plausible approach to complex problems such as climate change," but it is systematically underfunded and underconsidered (Bromham et al., 2016). Framing this as a wicked problem and researching how to solve it systematically (at the university, department, journal, and grant-agency levels) could potentially be impactful.

A better open-source human-legible world-model, to be incorporated into future ML interpretability systems

Artificial intelligence

[UPDATE 3 MONTHS LATER: Better description and justification is now available in Section 15.2.2.1 here.]

It is probable that future powerful AGI systems will involve a learning algorithm that builds a common-sense world-model in the form of a giant unlabeled black-box data structure—after all, something like this is true in both modern machine learning and (I claim) human brains. Improving our ability, as humans, to look inside an... (read more)

Incubator Incubator

Effective Altruism

Effective Altruism needs more incubators. Why not have an incubator to incubate them?

Risks: We end up with too many incubators.

(This is my least serious proposal)

(This is a refinement of Yonatan Cale's proposal)

Limited Scope Impact Purchase:

Various cause areas incl. AI Safety and Effective Altruism

The biggest challenge with impact purchases is that the market for selling is usually much larger than the market for buying. This project would limit the scope of the purchase to particular people, to a) ensure that impact sellers were aware of the impact purchase's existence when they decided to pursue that project* and b) address this market imbalance and therefore increase people's odds that they are paid and hence ... (read more)

Thanks for running this competition, looks like there are plenty of great ideas to choose from!
I submitted my entry on improving human intelligence through non-invasive brain stimulation through the Google form, it said my entry was recorded but I got no email confirmation.

Has anyone else submitted through the Google Form, and did they also get no email confirmation?

Does anyone know when the winners of the competition will be announced? 

Just came to say that this ideas competition really turned me on - I loved it. I hope this becomes an ongoing community 'suggestion box', perhaps monitored once a month.

I understand that one could write a blog post with an idea, but I think this is an even better, lower-barrier way of getting ideas quickly.

Personally, this competition helped me realize that I have a different lens than many EAs, and that my ideas and skills could be valued. Thank you.

Funds to study efficient logistics or to run a logistics company

Logistics is one of the bottlenecks for goods and services, leading to uneven distribution of resources. Especially during pandemics and lockdowns, a shortage of delivery workers leads to food shortages: outbreak areas face shortages while other regions have surpluses. Digitalisation helps information travel quickly and cheaply, but what about delivering physical products? It's something more than autonomous vehicles; human beings are a fragile part of the process. Now, in HK,... (read more)

Reframe U.S. college EA chapters as an alternative to Greek life

Values and Reflective Processes, Empowering Exceptional People, Effective Altruism

Following the model of Alpha Phi Omega, the largest coed service fraternity in the U.S. with ~335 chapters and 400,000 alumni, reframing EA chapters as social organizations may help with recruitment and retention. It could also encourage a broader range of activities for chapters to run throughout the year including things like hosting workshops for other students on how to think about careers, hosting film scree... (read more)

[fairly unsure, would be interested in thoughts]

Facilitate global cooperation via economic relationships and shared ownership

Values and Reflective Processes

We live in an economically connected world that is characterized by mutually beneficial trades. On top of that, countries are generally heavily invested in diverse financial securities of other countries. This way, economic progress in one country is generally to the benefit of the whole international community. Consequently there are strong incentives for peaceful coexistence, internalization of proble... (read more)

Making significant improvements to the EA wiki (last minute submission)

See this for a range of ideas for improving the EA wiki which could be funded. I'd suggest that all changes made to the wiki should also be replicated and linked across the EA ecosystem and onto normal Wikipedia. 

A living 'cause prioritisation flowchart' / better visualisation templates or graphic design support for EA communicators (quick submission)
[Inspired by this comment]

EA has many aims and a complex causal logic behind these aims. Visualisation helps to explain this better, and flow charts are one established way we do this. They could be used effectively in many communication settings, but there is a coordination problem: most individual actors who need such a chart lack the expected ROI or experience to create one. We would therefore welcome more work... (read more)

Systemic change marginal cost-effectiveness program estimation and evaluation

Effective Altruism, Research That Can Help Us Improve, Artificial Intelligence

Instead of focusing on single outcomes (the ones which are measurable and highlighted by academia), one can focus on advancing systemic change (institutionalizing safe, positive systems) by selecting programs with the highest (and lowest) marginal cost-effectiveness, considering the cost of developing impact. Then, impact can be increased by 1) advising resource shifts from low to high cost-effectiveness progr... (read more)

Effects of humanitarian development on peace and conflict

Great Power Relations, Values and Reflective Processes, Effective Altruism, Biorisk and Recovery from Catastrophe

Does conflict not stem from suboptimal institutions, such as those which value aggression and disregard for lack of better known alternatives, and so can it not be prevented by general humanitarian development? It may be that when people are more able to contribute to and benefit from others' upskilling, rather than competing for scarce resources because it is challenging to increase efficien... (read more)

Understanding the determinants of wellbeing

Research That Will Help Us Improve

Without understanding the fundamentals of individuals' wellbeing, you cannot build institutions based on and optimizing for wellbeing, even if you have a lot of attention and predictive capacity: you do not know what to advocate for or research. So, you should fund a team of neuroscientists, sociologists, and anthropologists to provide an interdisciplinary, multi-perspective understanding of what, fundamentally, makes individuals happy. This should be understood fundamentally (e.g. safe... (read more)

Facilitate interdisciplinarity in governmental applications of social science

Values and Reflective Processes, Economic Growth

At the moment, governmental applications of social science (where, for example, economists who use the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Also, behavioral scientists and psychologists could potentially be very helpful in improving the... (read more)

Using the EA survey to answer key research questions. 
Research & movement building

We would like to support work by EA researchers to preregister hypotheses and measures to test (with ethics approval) i) in the EA survey (maybe as a non-mandatory final section) and ii) with the public, to compare the results. For instance, this could explore how different demographics, personality types, and identities (e.g., identification as a social justice activist or climate change activist) interact with different moral views or arguments for key EA behaviours ... (read more)

Collective financing for EA products
Movement building, coordination, coincidence of wants problems

As shown by crowdfunding platforms, collective financing has many benefits. For instance, it allows individuals to collectively fund projects that they could not fund as individuals, and allows projects to start and scale when they would not otherwise exist. We would therefore like to fund projects that support collective financing within the EA community. For instance, this could involve allowing individuals to commit to providing a project or service (e.g., a ... (read more)

EA-oriented research search engines

Effective altruism

EA researchers and people in similar roles, such as grantmakers and policy analysts, face a difficult search challenge. They are often trying to find high-quality resources that synthesise expert consensus in fields that are unfamiliar to them. Google often returns results that are too low-quality and popularly oriented, but Google Scholar returns results that are too specific or only tangentially related to EA/policy/grantmaker interests. An improved search engine would return quality synthesis r... (read more)

Lobby big tech companies to create AI Safety departments to monitor the growth of machine learning technology and implement proactive risk mitigation.

Incentivize researchers to prioritize paradigm shifts rather than incremental advances

Economic growth, Research That Can Help Us Improve

There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse/incremental and more pro-moonshots. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.
 

Research to determine what human cultures minimize the risks of major catastrophes

Great Power Relations, Values and Reflective Processes, Artificial Intelligence

I posit that human cultures differ, and that some cultures may be more likely to punish in minor ways and to adapt to new situations peacefully, while others may be more likely to wage wars. This may be completely wrong.

But if it is not, we could investigate what processes can be used to foster the sort of culture that is less likely to immanentize global catastrophes, a... (read more)

Authoritative Statements of EA Views

Epistemic Institutions

In academia, law, and government, it would be helpful to have citeable statements of EA-relevant views presented in an authoritative and unbiased manner. Having such material available lends gravitas to proposals that address related problems and provides greater justification for taking those views for granted.

(This is a variation on 'Expert polling for everything' focused on providing authority of views to non-experts. The Cambridge Declaration on Consciousness is a good example.)

Scoring scientific fields
 

Epistemic Institutions

Some fields of science are uncontroversially more reliable than others. Physics is more reliable than theoretical sociology, for example. But other fields aren't that easy to score. Should you believe the claims of a random sleep research paper? Or a paper from personality psychology? Efficacy is just as important, as a scientific field with low efficacy is probably not worth engaging with at all.

A scientific field can be evaluated by giving it a score along one or more dimensions, where a lo... (read more)

Making Impactful Science More Reputable

There are two things that matter in science: reputation and funding. While there is more and more funding available for mission-driven science, we'd be excited to see projects that try to increase the reputation of impactful science. We think that increasing the reputation of impactful work could, over time, substantially increase the amount of research done on most things that society cares about.

Some of the ways we could provide more reputation to impactful research:

  • Awarding prizes to past and present researchers
... (read more)

Ethics Education

Values and Reflective Processes

Over the next century, leaders will likely have to make increasingly high-stakes ethical decisions. In democratic societies, large numbers of people may play a role in making those decisions. And yet, ethics is seldom thoroughly taught in most educational curricula. While it may be covered briefly in secondary school and is covered in detail at university for those who attend and choose to study it, many accomplished people do not have even a superficial understanding of the most important ethical theories and... (read more)

Experimental Wargames for Great Power War and Biological Warfare

Biorisk and Recovery from Catastrophe, Epistemic Institutions

This is a proposal to fund a series of "experimental wargames," on great power war and biological warfare. Wargames have long been a standard tool of think tanks, the military, and the academic IR world since the early Cold War. Until recently, however, these games were largely used to uncover unknown unknowns  and help with scenario planning. Most such games continue to be unscientific exercises. Recent work on "experimental wa... (read more)

Normalize broad ownership of hazmat suits (and of an N-day supply of non-perishable food and water)

Biorisk

If everyone had either worn a hazmat suit all the time or stayed at home for 14 days (especially in the early stages of the COVID-19 pandemic), the pandemic would have ended. Normalize, fund, and advocate for broad ownership of hazmat suits and of non-perishable food and water, to prevent future pandemics. This may be more feasible in developing countries than in developed countries, but in principle foreign aid/EA can make it feasible for developed countries as well.

2
Greg_Colbourn
This would only work for pandemics if literally everyone in the world did it at the same time. I think we'd probably need effective global governance for that (that itself isn't an x-risk in terms of authoritarianism or permanently curtailing humanity's flourishing).

Building reciprocal altruism into exercise, via a nonprofit with a mobile app

Effective altruism

Regular exercise likely has a very large positive impact on health and well-being. A lot of Americans do not do sufficient regular exercise, which is probably a major reason for suboptimal quality of life and subsequently suboptimal productivity.

One reason why people don't like getting regular exercise by going to the gym is that it feels artificial or unpleasant, like a waste of time and energy. In a sense, this viewpoint is correct; moving heavy objects ... (read more)

Research on predicting interest in EA/longtermism

Effective altruism, Research that will help us improve

In order to help movement-builders better target their efforts, research on how to identify people who are more likely than average to be receptive to EA/longtermism could be quite impactful. Facilitating this research in the behavioral sciences can be done by funding fellowships, grants, and collaboration opportunities on the topic.

3
Dawn Drescher
Lucius Caviola and colleagues are working on this. Doesn’t mean that there shouldn’t be more efforts like that or that they don’t need help. :-)

Wikipedia research/infrastructure/support

Epistemics

Wikipedia is a hugely valuable public resource. Internally, however, there are slow processes and aging mechanisms, as in many institutions. Run a research and lobbying organisation to help Wikipedia maximise its value to the world.

Internal market for (EA) recruitment
Effective Altruism Operations, Economic Growth

An open-source tool that would allow companies/orgs to set up internal (prediction) markets where all employees could bet on which candidate would be the best fit, and be awarded points/real money for every month the hire stayed at the company.

2
Nathan Young
You would want to run markets on who would stay, I think, since that's the resolution criterion.
1
jknowak
Yes, that too, but what I was thinking is that the votes on "whom to hire" could then be used (if you voted for the winning candidate) as shares of a bonus paid out monthly.

(Per Nick's post, reposting)

Practitioner research

All

Universities are primarily filled with professors trained in similar ways. Although universities sometimes have “professors of the practice,” these positions are often reserved for folks nearing retirement. We are interested in funding ways for practitioners to spend time conducting and publishing “research” informed by their lived real-world experiences.

(Per Nick's note, reposting)

 Cross-university research

Values and Reflective Processes, Research That Will Help Us Improve, Epistemic Institutions, Empowering Exceptional People  

Since 1978, more than 30 scientists supported by the Howard Hughes Medical Institute have won the Nobel prize in medicine. We are interested in funding other cross-institutional collections of researchers and financial support beyond the biosciences, focusing on economic growth, public policy, and general social sciences.

Social sector infrastructure

Values and Reflective Processes, Empowering Exceptional People

If you start or run a for-profit company, there is a range of software and other infrastructure to help you run your business: explainer guides, AWS, Salesforce.com, etc. Similar infrastructure for not-for-profits and other NGOs exists, particularly cross-border. We are interested in finding a new generation of infrastructure that supports the creation and maintenance of the social sector. This could look like a next-generation low-cost fiscal sponsor, an accounting system focused on NFP accounting and filing 990s, or anything that makes it easier to start and run institutions.

1
Yonatan Cale
Monday.com recently founded a social-impact team that is trying to help charities in ways that (1) use technology, and (2) are scalable (many charities can benefit from a single thing that Monday builds). If you have ideas, let me know; I know someone on their team.
2
Zac Townsend
Would be happy to help, but they might be farther along than my thinking either way. I just know a ton of people who have tried to get fiscal sponsors and it's a pain (and expensive!). 

Effective Altruism Promotional Materials

Effective Altruism

We are looking to invest in the production of high-quality materials for promoting Effective Altruism and Effective Altruism cause areas, including posters, brochures, and booklets. Effective Altruism is heavily focused on fidelity of transmission, so these materials should be designed to avoid low-quality transmission. This could be achieved by distributing materials that promote opportunities for deeper engagement or by designing materials very carefully. Such an organisation would likely conduct studies and focus groups to understand the effectiveness of the material being distributed and whether it is maintaining its fidelity.

Historical investigation on the relation between incremental improvements and paradigm shifts

Artificial Intelligence

One major question that heavily influences the choice of alignment research directions is the degree to which incremental improvements are necessary for major paradigm shifts. As the field of alignment is largely preparadigmatic, there is a high chance that we may require a paradigm shift before we can make substantial progress towards aligning superhuman AI systems, rather than merely incremental improvements. The answer to this question det... (read more)

Antarctic Colony as Civilizational Backup

Recovery from Catastrophe

Antarctica could be a good candidate for a survival colony. It is isolated, making it more likely to survive a nuclear war, pandemic, or roving band of automated killer drones. It is a tough environment, allowing it to double as a practice space for a Mars colony. Attempting to build and live there at a larger scale than has been done before may spur some innovations. One bottleneck that may need resolving is cheaper transportation to Antarctica, which currently relies on flights or a limited number of specialized boats.

Creating a Giving What We Can for volunteering time and bequests (last minute)

Given the success of GWWC, we would like to see organisations emerge that seek pledges and build communities around the effective use of resources, but in different ways (e.g., time rather than money, or bequests rather than donations) [inspired by this].

EA community's trading bot

Artificial Intelligence, Effective Altruism

If you have the capital to invest, are able to influence the market, and are aligned with EA, why not build a trading bot? EAs who are among the world's top experts on AI could code it, possibly drawing on the knowledge of their respective institutions, and of course impact is generated. It saves time; just think about it.

We now see that dictatorships slow the progress of humanity and can plausibly threaten large-scale nuclear wars. Dictatorships are often toppled from inside by public protests (e.g. Poland 1988-1989, Tunisia 2011), but public protests face a coordination problem. There are many people willing to protest in dictatorships (e.g. Russia), and protesting in large groups is both more efficient and less risky, because law enforcement has a cap on the number of people it can detain. Idea: develop an app to sign up for a prospective protest in advanc... (read more)

1
DC
I've thought about this space a good deal. I think this is really dangerous stuff. It must be aligned with the good. Don't call up what you can't put down. "Coordination is also collusion." - Alex Tabarrok

Sad that I missed this! Only saw this the day after it closed.

A service/consultancy that calculates the value of information of research projects

Epistemic Institutions, Research That Can Help Us Improve

When undertaking any research or investigation, we want to know whether it's worth spending money or time on it. There are a lot of research-type projects in EA, and the best way to evaluate and prioritise them is to calculate their value of information (VoI). However, VoI calculations can be complex, so we should build a team of experts that can form a VoI consultancy or service provider.

Examples of use cases:
1... (read more)
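To illustrate the kind of calculation such a consultancy would perform, here is a toy sketch of the expected value of perfect information (EVPI) for a funding decision. All numbers and the scenario are hypothetical; real VoI analyses would model many more outcomes and use expected value of *imperfect* information:

```python
# Toy EVPI sketch. Hypothetical scenario: decide whether to fund a $1M
# intervention that succeeds with probability 0.4, yielding $5M of
# impact if it works (and $0 otherwise).

p_success = 0.4
cost = 1.0    # $M
payoff = 5.0  # $M of impact if the intervention works

# Expected net value of each action under current uncertainty.
ev_fund = p_success * payoff - cost  # 0.4 * 5 - 1 = 1.0
ev_skip = 0.0
best_now = max(ev_fund, ev_skip)

# With perfect information, we would fund only in the world where the
# intervention succeeds, and skip (netting 0) otherwise.
ev_with_info = p_success * (payoff - cost) + (1 - p_success) * 0.0

# EVPI = value of deciding with perfect information minus the value of
# the best decision we can make now. Research resolving this
# uncertainty is worth at most this much.
evpi = ev_with_info - best_now
print(f"EVPI: ${evpi:.1f}M")
```

In this hypothetical, research that fully resolved the uncertainty would be worth up to $0.6M, which is the kind of number that tells you whether a given study is worth commissioning at all.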

Build an intranet for the effective altruism community 

Effective Altruism, Empowering Exceptional People

If effective altruism is going to be "the last social movement the world needs" it will need to operate differently from past movements in order to last longer and reach more people. Given that coordination is a crucial element for success within a distributed global network, a movement intranet could improve coordination on projects, funding and research and build a greater sense of community. An intranet would also help the movement (1) consolidat... (read more)

4
Chris Leong
What's the advantage of an intranet vs. a website with registration?
3
barkbellowroar
(Short answer) More security, more features, and the consolidation of a lot of existing but disconnected infrastructure tools, which could strengthen movement coordination, increase collaboration and calibration, and sustain long-term engagement with the community. Just like you can't catch rain with a sieve, you can miss a lot of value with a fragmented ecosystem.

(Longer answer) An intranet would subsume under one platform a lot of current tools: event sign-ons, the forum, EA Hub's directory, Facebook groups, job/internship boards, the Wiki, various communication channels (Twitter, Discords, Slacks, email, etc.), surveys and polls, chapter sites, separate application forms, the librarian project, and organization newsletters. An intranet can also provide a greater array of features that do not currently exist in the ecosystem, including (but not limited to) spaces for sub-group discussions, tiered engagement levels, guided onboarding for new members, greater analytics, and much more.

I think the biggest benefit of all is concentrating the online activity of the movement in one place, versus the present state of having to check a disorganized collection of websites, blogs, sign-ons, and social accounts in order to keep up with what is going on in the community. The majority of our time should be spent on our work and collaboration - not trying to track down important or relevant information, figure out how to get involved and meet people in the movement, or figure out how to learn, grow, and develop as effective altruists.

Given the recent sunsetting of the EA Hub - and their comments implying that CEA may be attempting to develop a larger platform - this idea may be in progress. However, I still wanted to share and spark more discussion on the need for an intranet, because I believe it would greatly improve movement coordination and strengthen the sense of community while significantly reducing the workload for meta organizations so they can
3
Chris Leong
If you write a post on this I would read it. Two minor comments:

  • It's possible to create a central hub platform without making it an intranet
  • I'm skeptical of the security benefits given how open EA is (vs. a normal company)

Evaluating powerful political groups and people (political parties/activists/…)

Values and Reflective Processes

Currently GiveWell provides people with a guide for effective giving. We could apply a similar model to provide a guide for effective voting and advocacy.

We’d like to see an organisation that evaluates particularly powerful political individuals/groups/parties and advocates for those that align with EA values.

We could evaluate them on things like:

Commitment to using evidence and careful reasoning to work out how to maximise good (particularly long-... (read more)

Align university careers advising incentives with impact

Effective altruism

Students at top universities often have lots of exposure to a limited set of career paths, such as consulting and finance. Many graduates who would be well-suited to high-impact work don’t consider it because they are just unaware of it. Universities have little incentive to improve this state of affairs, as the eventual social impact of graduates is hard to evaluate and has little effect on their alma mater (with some notable exceptions). We would therefore be excited to fund effort... (read more)

Space's preferences and objectives research

Space Governance, Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Great Power Relations, Research That Can Help Us Improve

In order to govern space well, one needs to understand its preferences and objectives: for example, those of dark energy and dark matter. These could then be weighted by an AI approved under the veil of ignorance by all entities, and solutions that maximize the weighted sum, while centralizing wellbeing and systemic stability, selected and supported by any space ... (read more)

Commercial marketing analysis

Artificial Intelligence, Epistemic Institutions, Economic Growth

What tricks does AI use to manipulate humans? For example, why are glossy balls used increasingly often in unrelated advertisements? The color gradients that captivate attention (day and night), physical or mental space intrusion narrated as giving one the power to defend themselves from such issues or to offend others, racial and gender hierarchical power stereotypes in conjunction with images that narrate positive relationships, etc. AI would love it, since analys... (read more)

Blockchain for people to prove their ID. In a disaster, people's identity documents are often lost or taken. This blockchain would allow people to prove who they are, and would also allow direct disaster-relief payments to be made via the blockchain.
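The core property this proposal relies on can be illustrated without a full blockchain. Below is a minimal sketch (all names and structure hypothetical, not a proposed implementation): an append-only, hash-linked ledger of identity attestations, where each entry commits to its predecessor's hash, so any tampering with an earlier record invalidates every later link.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a ledger record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class IdentityLedger:
    """Append-only, hash-linked ledger of identity attestations.

    Each entry stores the hash of the previous entry, so altering any
    earlier record breaks the chain for all subsequent entries.
    """

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def attest(self, person_id: str, claim: str) -> dict:
        """Append a claim (e.g. an ID attestation or a relief payment)."""
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"person_id": person_id, "claim": claim, "prev": prev}
        entry = {**body, "hash": record_hash(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and link; False if anything was tampered with."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("person_id", "claim", "prev")}
            if e["prev"] != prev or e["hash"] != record_hash(body):
                return False
            prev = e["hash"]
        return True
```

A real system would distribute this ledger across many nodes and add signatures, but the sketch shows why lost paper documents are not fatal: the attestation survives as long as the chain does.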

Refinement of project idea #8, Pathogen sterilization technology

Add: ‘We’d also be interested in the development of therapeutic techniques that could treat infections using these (e.g. relying on physical principles) or similar approaches.’

Pipeline for podcasts

Effective altruism

Crowdsourced resources, networks, and grants could help EAs and longtermists create high-impact, informative podcasts.

Potential Test Case for AGI

Attempt to simulate an artificial general intelligence using Ouijably

Low odds it works, but I thought that if you put enough people on a spirit board, it might exhibit behaviour similar to an oracle-type AGI. This implementation (https://github.com/ably-labs/ouija) means it wouldn't take much organising to attempt. Maybe tweak it so participants predict the direction the planchette will move, rather than relying on the ideomotor effect. I thought the idea would be outside the rationalist's window of consideration as somethin... (read more)

[comment deleted]

[comment deleted]

[comment deleted]

[comment deleted]