The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.

We have a longlist of project ideas that we’d be excited to help launch. 

We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous). 

All submissions must be received in the next week, i.e. by Monday, March 7, 2022. 

We are excited about this prize for two main reasons:

  • We would love to add great ideas to our list of projects.
  • We are excited about experimenting with prizes to jumpstart creative ideas.

To participate, you can either

  • Add your proposal as a comment to this post (one proposal per comment, please), or
  • Fill in this form

Please write your project idea in the same format as the project ideas on our website. Here’s an example:

Early detection center

Biorisk and Recovery from Catastrophes

By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.

You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.

Some rules and fine print:

  • You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
  • At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
  • At our discretion, we will award larger prizes for submissions that we really like.
  • Prizes will be awarded at the sole discretion of the Future Fund.

We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.

We’re excited to see what you come up with!

(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)

Comments
Pablo
1y

Retrospective grant evaluations

Research That Can Help Us Improve

EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker's track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.

Avi Lewis
1y
I'd like to expand on this: a think tank or paper that formulates a way of evaluating all grants against a set of objective, quantifiable criteria, in order to better inform future allocation decisions so that each dollar spent makes the greatest possible impact. In this respect, retrospective grant evaluation is just one variable for measuring grant effectiveness. I have a few more ideas that could be combined to create some kind of weighted scoring mechanism for grant evaluation:

  • Social return on investment (SROI): arriving at a set of non-monetary variables to quantify social impact.
  • Cost-effectiveness analysis: GiveWell is a leader in this. We could consider applying some of their key learnings from the not-for-profit space to EA projects.
  • Horizon scanning: governmental bodies have departments that perform this kind of work. A proposal could be assessed by its alignment with emerging-technology forecasts.
  • Backcasting: seek out ventures that are working towards a desirable future goal.
  • Pareto optimality: penalize ideas with potential negative impact on factors/people outside the intended target audience.
  • Competence and track record: prioritize grant allocators/judges based on previous successful grants, and prioritize grants to founders or organizations with a proven track record of competence.

Obviously this list could go on; these are just a small number of possible variables. The idea is simply to build a model that can score the utility of a proposed grant.
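As a rough illustration of the kind of weighted scoring mechanism described here, a minimal sketch follows. The criteria names, weights, and ratings are all hypothetical placeholders, not a proposed methodology:

```python
# Illustrative sketch of a weighted grant-scoring model.
# Weights and criteria are invented placeholders for discussion only.

CRITERIA_WEIGHTS = {
    "sroi": 0.25,                 # social return on investment
    "cost_effectiveness": 0.25,
    "horizon_alignment": 0.15,    # fit with emerging-technology forecasts
    "backcasting_fit": 0.10,      # progress toward a desirable future goal
    "externality_penalty": 0.10,  # Pareto-style check on negative spillovers
    "track_record": 0.15,
}

def score_grant(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (each normalised to 0-1) into one score."""
    return sum(CRITERIA_WEIGHTS[name] * ratings.get(name, 0.0)
               for name in CRITERIA_WEIGHTS)

example = {
    "sroi": 0.8,
    "cost_effectiveness": 0.7,
    "horizon_alignment": 0.5,
    "backcasting_fit": 0.6,
    "externality_penalty": 0.9,  # higher = fewer expected negative spillovers
    "track_record": 0.4,
}
print(round(score_grant(example), 3))  # → 0.66
```

Any real version would of course need far more care in choosing and normalising the criteria; the point is only that a transparent scoring model is easy to operationalise once the variables are agreed.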
brb243
1y
Doesn't this neglect that some grants are made to strategically develop interest, by presenting ideas in ways that appeal to different decision-makers, since the objectives are largely already known (improve the lives of humans and animals over the long term, and prevent actors, including those who use and develop AI, from reducing the wellbeing of these individuals)? It could be a reputational risk to evaluate along the lines of: 'well, we started convincing the government to focus on the long term by appealing to the extent of the future, so now we can start talking about quality of life in various geographies, and if this goes well then we move on to the advancement of animal-positive systems across spacetime?'

This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.

Stephen Clare
1y
I'm (pleasantly) surprised by the number of entries! But as a result the Forum seems pretty far from optimal as a platform for this discussion. Would be helpful to have a way to filter by focus area, for example.
Nathan Young
1y
Yeah I suggest it should be done like this, with search and filters as you suggest. https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=G7aLWq4zypE77Fn6f

I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.

Taras Morozov
1y
To prove the point: at the moment, the most upvoted comment is also the oldest one, Pablo's "Retrospective grant evaluations".
Greg_Colbourn
1y
The winners have been announced [https://forum.effectivealtruism.org/posts/MBDHjwDvhDnqisyW2]. It's interesting to note the low correlation between comment karma and awards.

Of the (3 out of 6) public submissions, the winners had a mean of 20 karma [as of posting this comment [https://forum.effectivealtruism.org/posts/MBDHjwDvhDnqisyW2/awards-for-the-future-fund-s-project-ideas-competition?commentId=GyCGwDTmg6uGYphCo]], minimum 18, and the (9 out of 15) honourable mentions a mean of 39 (suggesting perhaps these were somewhat weighted "by popular demand"), minimum 16. None of the winners were in the top 75 highest rated comments; 8/9 of the publicly posted honourable mentions were (including 4 in the top 11). There are 6 winners and 15 honourable mentions listed in OP (21 total); the top 21 public submissions had a mean karma of 52, minimum 38; the top 50 a mean of 40, minimum 28; and the top 100 a mean of 31, minimum 18. And there are 86 public submissions not amongst the awardees with higher karma than the lowest karma award winner. See spreadsheet [https://docs.google.com/spreadsheets/d/12ntN3djQvTE_k_rH_fDfhT4WQfNQZMO0VmfMD--ocr4/edit#gid=0] for details.

Given that half of the winners were private entries (2/3 if accounting for the fact that one was only posted publicly 2 weeks after the deadline), and 40% of the honourable mentions, one explanation could be that private entries were generally higher quality. Note karma is an imperfect measure [https://forum.effectivealtruism.org/posts/GseREh8MEEuLCZayf/nunosempere-s-shortform?commentId=kLuhtmQRZBJpcaHhH] (so in addition to the factor Nathan mentions, maybe the discrepancy isn't that surprising).
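For readers wanting to reproduce this kind of mean/minimum comparison from the linked spreadsheet, a minimal sketch; the karma scores below are invented placeholders, not the actual data:

```python
# Sketch of the karma comparison above. The scores are made-up examples;
# the real figures are in the spreadsheet linked in the comment.

def summarise(scores):
    """Mean and minimum karma for a group of public submissions."""
    return {"mean": sum(scores) / len(scores), "min": min(scores)}

winner_karma = [25, 18, 20]                           # public winning entries
mention_karma = [70, 55, 40, 35, 30, 28, 25, 20, 16]  # public honourable mentions

print(summarise(winner_karma))
print(summarise(mention_karma))
```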
Nathan Young
1y
Alternatively, there could be an alternate ranking mode where you get two comments shown at once and choose whether one is better or they are about the same. Even a few people doing that would start to give a sense of whether they agree with the overall ranking.

Starting EA community offices

Effective altruism

Some cities, such as Boston and New York, are home to many EAs and some EA organizations, but lack dedicated EA spaces. Small offices in these cities could greatly facilitate local EA operations. Possible uses of such offices include: serving as an EA community center, hosting talks or reading groups, providing working space for small EA organizations, reducing overhead for event hosting, etc.

(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)

Here is a more ambitious version:

EA Coworking Spaces at Scale

Effective Altruism

The EA community has created several great coworking spaces, but mostly in an ad hoc way, with large overheads. Instead, a standard EA office could be created in up to 100 towns and cities. Companies, community organisers, and individuals working full-time on EA projects would be awarded a membership that allows them to use these offices in any city. Members gain from being able to work more flexibly, in collaboration with people with similar interests (this especially helps independent researchers with motivation). EA organisations benefit from a decreased need to do office management (which can be done centrally without special EA expertise). EA community organisers gain easier access to an event space and standard resources, such as a library and hot-desking space, and some access to the expertise of others using the office.

Leo
1y

Here is an even more ambitious one:

Found an EA charter city

Effective Altruism

A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.

RyanCarey
1y
What would be the value add of an EA city, over and above that of an EA school and coworking space? For example, I don't see why you need to eat at an EA restaurant, rather than just a regular restaurant with tasty and ethical food. Note also that the libertarian "Free State Project" [https://en.wikipedia.org/wiki/Free_State_Project] seems to have failed, despite there being many more libertarians than effective altruists.
mako yass
1y
Lower cost of living, meaning you can have more people working on less profitable stuff. I'm not sure 5,000 Free Staters (out of 20k signatories) should be considered a failure.
RyanCarey
1y
Right, but it sounds like it didn't go well afterwards? https://www.google.com/amp/s/newrepublic.com/amp/article/159662/libertarian-walks-into-bear-book-review-free-town-project
Leo
1y
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
RyanCarey
1y
Could you elaborate on the policies? And what, roughly, are you picturing: an EA-sympathising municipal government, or more of a Honduran special economic zone type situation?
Leo
1y
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report [https://forum.effectivealtruism.org/posts/EpaSZWQkAy9apupoD/intervention-report-charter-cities] counts new policies among the benefits of charter cities. If we are now supposed to have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thinking into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational colonization of other planets when nobody on Earth has ever been able to successfully found even one rational city?
mako yass
1y
Note: VR is going to get really good in the next three years, so I wouldn't personally recommend getting too invested in any physical offices, but I guess as long as we're renting it won't be our problem.
Jeff Kaufman
1y
I think it is pretty unlikely that VR improvements on the scale of 3 years make people stop caring about being actually in person. This is a really hard problem that people have been working on for decades, and while we have definitely made a lot of progress, if we were 3 years from "who needs offices?" I would expect to already see many early adopters pushing VR as a comfortable environment for general work (VR desktop) or meetings.
mako yass
1y
What problem are you referring to? Face tracking and remote presence didn't have a hardware platform at all until 2016, weren't a desirable product until maybe this year (mostly due to covid), and won't be a strongly desirable product until hardware starts to improve dramatically next year. And due to the perversity of social software economics, it won't be profitable in proportion to its impact, so it'll come late.

There are currently zero non-blurry face-tracking headsets that are light enough to wear throughout a workday, so you should expect to not see anyone using VR for work. But we know that next year there will be at least one of those (Apple's headset). It will appear suddenly and without any viable intermediaries. This could be a miracle of Apple, but from what I can tell, it's not; competitors will be capable of similar feats a few years later. (I expect to see limited initial impact from Apple's VR, given limited availability and reluctance from Apple to open the gates; the VR office won't come all at once, even though the technical requirements will.)

(You can get headsets with adequate visual acuity (60ppd) right now, but they're heavy, which makes them less convenient to use than 4k screens. They're expensive, and they require a bigger, heavier, and possibly even more expensive computer to drive them (though this was arguably partly a software problem), which also means they won't have the portability benefits that 2025's VR headsets will have, which means they're not going to be practical for much at all. And as far as I know the software for face tracking isn't available for them, and even if it were, it wouldn't have a sufficiently large user network in professional settings.)
Chris Leong
1y
You think they'll get past the dizziness problem?
mako yass
1y
I think everyone will adapt. I vaguely remember hearing that there might be a relatively large contingent of people who never do adapt, though I was unable to confirm this with 15 minutes of looking just now. Every accessibility complaint I came across seemed to be a solvable software problem rather than anything fundamental.
Chris Leong
1y
I heard that New York was starting a coworking space as well.
JanBrauner
1y
I think Berlin has something like this
victor.yunenko
1y
Indeed, the space was organized by Effektiv Spenden: teamwork-berlin.org [https://www.teamwork-berlin.org]
Yonatan Cale
1y
I think EA Israel would have more people working remotely in international organizations if we had community offices. [We recently got an office, which I'm going to check out tomorrow; not an ideal location for me, but I will try!]
jh
1y

Investment strategies for longtermist funders

Research That Can Help Us Improve, Epistemic Institutions, Economic growth

Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out. 

We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.

Some of the ways the strategies of altruistic funders may differ include:

  • Mission-correlated investing
... (read more)

I have had a similar idea, which I didn't submit, relating to creating investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up from, or modelled on, the existing EA long term future fund (which I recall reading about but can't find now, sorry).

 

Edit - found it and some ideas - see this and top level post.

Greg_Colbourn
1y
Just going to note that SBF/FTX/Alameda are already setting a very high benchmark when it comes to investing!
brb243
1y
A systemic change investment strategy [https://docs.google.com/document/d/1qoQLU4KzkGvFj05XXGbLNnemgEYW2M6iFohJZzBkClA/edit] for your review.
JBPDavies
1y
You may be interested in the following project I'm working for: https://deeptransitions.net/news/the-deep-transition-futures-project-investing-in-transformation/. The project goal is developing a new investment philosophy & strategy (complete with new outcome metrics) aimed at achieving transformational systems change. The project leverages the Deep Transitions theoretical framework as developed within the field of Sustainability Transitions and Science, Technology and Innovation Studies to create a theory of change and subsequently enact it with a group of public and private investors. Would recommend diving into this if you're interested in the nexus of investment and transformation of current systems/shaping future trajectories. I can't say too much about future plans at this stage, except that following the completion of the current phase (developing the philosophy, strategies and metrics), there will be an extended experimentation phase in which these are applied, tested and continuously redeveloped.

Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles

Effective Altruism

When it comes to enhancing productivity, health, and wellbeing, the EA community does not sufficiently utilise the division of labour. Currently, community members need to obtain the relevant knowledge and do related research (e.g. on health issues) themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.

Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.

Brendon_Wong
1y
I was going to write a similar comment about researching and promoting well-being and well-doing improvements for EAs as well as the general public! Since this already exists in similar form as a comment, I'm strong upvoting instead. Relevant articles include Ben Williamson's project (https://forum.effectivealtruism.org/posts/i2Q3DTsQq9THhFEgR/introducing-effective-self-help) and Dynomight's article on "Effective Selfishness" (https://dynomight.net/effective-selfishness/). I also have a forthcoming article on this. Multiple project ideas that have been submitted also echo this general sentiment, for example "Improving ventilation," "Reducing amount of time productive people spend doing paperwork," and "Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)." Edit: I am launching this as a project called Better [https://www.better.so/]! Please get in touch if you're interested in funding, collaborating on, or using this!

Reducing gain-of-function research on potentially pandemic pathogens

Biorisk

Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.

 

Additional notes:

  • There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals(*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishi
... (read more)

Putting Books in Libraries

Effective Altruism
 

The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy, for each library, any number of books from the menu. This would ensure that folks interested in EA-related topics who browse a library discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.

The case for the project is that if you assume that the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than the financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries over a short period of weeks to months. It could also serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
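The back-of-the-envelope case can be made explicit with a short sketch, taking the $200k value per new contributor, $20 per book, and one read per book per year as the stated assumptions (all figures are assumptions, not measurements):

```python
# Breakeven arithmetic for the books-in-libraries idea,
# using the figures stated above (assumptions, not data).

VALUE_PER_NEW_CONTRIBUTOR = 200_000  # dollars
COST_PER_BOOK = 20                   # dollars
READS_PER_BOOK_PER_YEAR = 1

# Conversion rate at which one placed book pays for itself within a year:
breakeven_conversion = COST_PER_BOOK / (VALUE_PER_NEW_CONTRIBUTOR * READS_PER_BOOK_PER_YEAR)
print(f"{breakeven_conversion:.4%}")  # prints 0.0100%, i.e. 1 contributor per 10,000 reads

# Upper bound on outlay if every library took every book on the menu:
books_on_menu, libraries = 30, 10_000
print(f"${books_on_menu * libraries * COST_PER_BOOK:,}")  # prints $6,000,000
```

So under these assumptions the project clears its costs if even one reader in ten thousand becomes a contributor, and the total budget is bounded at $6M even in the maximal version.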

mic
1y

I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.

In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.

RyanCarey
1y
Not sure, but targeted social media advertising would also be a great project.
Greg_Colbourn
1y
Added [https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=qgHdwu5yzSyAu48Qv].

I like this idea, but I wonder: how many people/students actually still use physical libraries? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).

Cillian Crosson
1y
A way around this could be to provide e-books and audiobooks instead of physical copies. It would also make the distribution easier. (In the UK at least, it's possible to borrow e-books and audiobooks from your local library using the Libby app.)
Greg_Colbourn
1y
I imagine that e-book systems (text and audio) work via access to large libraries, rather than needing people to request books be added individually? So maybe there is no action needed on this front (although someone should probably check that most EA books are available in such collections).
mic
1y
My understanding is that individual libraries license an ebook for a number of uses or a set period of time (say, two years).
mic
1y
I think print books are still preferred by more readers compared to e-books. You might as well donate the books in both the physical and digital formats and probably also as an audiobook. It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.

Never Again: A Blue-Ribbon Panel on COVID Failures

Biorisk, Epistemic Institutions

Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?

We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Are you thinking of EAs running this themselves?  We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.

I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel".   In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project.  But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.

IanDavidMoss
1y
This is a great point. Also worth noting that there have been some retrospectives already, e.g. this one by the WHO: https://theindependentpanel.org/wp-content/uploads/2021/05/COVID-19-Make-it-the-Last-Pandemic_final.pdf It would be worth considering the right balance between putting resources toward conducting an original analysis vs. mustering the political will for implementing recommendations from retrospectives like those above.

Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name. 

https://en.wikipedia.org/wiki/Never_again 

Peter Wildeford
1y
Sorry - I was not aware of this
Ozzie Gooen
1y
No worries! I assumed as such.
Jan_Kulveit
1y
Note that CSER is running a project roughly in this direction.
Sean_o_h
1y
An early output from this project: Lessons from COVID-19 for GCR governance: a research agenda [https://f1000research.com/articles/11-514] (pre-review).

The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management, such as extreme climate change, radical loss of biodiversity and the governance of extreme risks posed by new technologies.

Our study aims to identify key moments ('inflection points') that significantly shaped the catastrophic trajectory of COVID-19. To that end, this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, both at the national and the global level, using counterfactual analysis.

Four aspects are used to assess candidate inflection points within each cluster:

  1. the information available at the time;
  2. the decision-making processes used;
  3. the capacity and ability to implement different courses of action; and
  4. the communication of information and decisions to different publics.

The Research Agenda identifies crucial questions in each cluster for all four aspects that should enable the identification of the key lessons from COVID-19 and the pandemic response.
Sean_o_h
1y
https://www.cser.ac.uk/research/lessons-covid-19/

Cognitive enhancement research and development (nootropics, devices, ...)

Values and Reflective Processes, Economic Growth

Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability---such as long/short-term memory, abstract reasoning, or creativity---and any stage of the research and development pipeline---from wet-lab research or engineering, through testing in humans, to product development.

 

Additional notes on cognitive enhancement research:

  • Importance:
    • Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
    • Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising, there is some precedent (e.g. massive benefit
... (read more)
5
Jackson Wagner
1y
I think this is an underrated idea, and should be considered a good refinement/addition to the FTX theme #2 of "AI-based cognitive aids". If it's worth kickstarting AI-based research assistant tools in order to make AI safety work go better, then doesn't the same logic apply to:

  • Supporting the development of brain-computer interfaces like Neuralink.
  • Research into potential nootropics (glad to hear you are working on replicating the creatine study!) or the negative cognitive impact of air pollution and other toxins [https://patrickcollison.com/pollution].
  • Research into tools/techniques to increase focus at work [https://80000hours.org/podcast/episodes/cal-newport-industrial-revolution-for-office-work/], management best practices for research organizations, and other factors that increase productivity/motivation.
  • Ordinary productivity-enhancing research software like better note-taking apps, virtual reality remote collaboration tools, etc.

The idea of AI-based cognitive aids only deserves special consideration insofar as:

  1. Work on AI-based tools will also contribute to AI safety research directly, but won't accelerate AI progress more generally. (This assumption seems sketchy to me.)
  2. The benefit of AI-based tools will get stronger and stronger as AI becomes more powerful, so it will be most helpful in scenarios where we need help the most. (IMO this assumption checks out. But this probably also applies to brain-computer interfaces, which might allow humans to interact with AI systems in a more direct and high-bandwidth way.)
Linch
1y
480

Create and distribute civilizational restart manuals

A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme disasters.

The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus the 10 most important books, sent to 100 safe and relatively uncorrelated locations: somewhere in New Zealand, an Antarctic research base, a couple of nuclear bunkers, nuclear submarines, etc.

We are perhaps even more concerned that great moral values, like concern for all sentient beings, survive and re-emerge than we are about preserving civilization itself, so we would love for people to do further research and work on how to preserve cosmopolitan values as well.

My comment from another thread applies here too:

Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition:

Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge or even if it’s the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe.

The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc.

I think that is also something that none of the existing projects take into account.

5
Greg_Colbourn
1y
Relatedly, see this post [https://forum.effectivealtruism.org/posts/z9KAkjEECYMp7ppyp/preserving-and-continuing-alignment-research-through-a] about continuing AI Alignment research after a GCR.
2
Dawn Drescher
1y
Very good!
5
[anonymous]
1y
If I may add, could this include:
- Designated survivors who have some of the skills required to interpret these manuals and rebuild technologies. This could be a rotating duty, and designated survivors would live in fallout shelters [https://en.wikipedia.org/wiki/Fallout_shelter#Switzerland]. (I'm super uncertain about this, keen on your thoughts :) )
- Storing specific technological artifacts, such that scientists in the future can reverse-engineer these technologies from the artifacts. These too would be stored in such bunkers.
I wonder if these could increase the odds of successful knowledge transmission, beyond written material.
6
Linch
1y
Yes this sounds plausible. I'm generally excited to think about ways humanity can survive and/or flourish after civilizational collapse and other large-scale disasters.
3
ben.smith
1y
Building on the above idea: research the technology required to restart modern civilization, and ensure the technology is understood and accessible in safe havens throughout the world. A project could ensure that not only the know-how but also the technology exists, dispersed in various parts of the world, to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand's economy is highly specialized and, for many technologies, relies on importing technology rather than producing it indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training enabling strategic technologies important for restart could be planted in locations like New Zealand and other social contexts which are relatively safe. At an extreme, industries that localize technology required for a restart could be subsidized. This would not necessarily mean the most advanced technology; rather, it means technologies that have been important to develop to the point we are at now.
3
Linch
1y
Yes this is exciting to me, and related. Though of course generalist research talent is in short supply within EA, so the bar for any large-scale research project taking off is nontrivially high.
2
Dawn Drescher
1y
I didn’t write this up as a separate proposal as it seemed a bit self-serving, but creating underground cities for EAs with all the ALLFED technology and whatnot and all these backups could enable us to afterwards build a utopia with all the best voting methods and academic journals that require Bayesian analyses and publish negative results and Singer on the elementary school curriculum and universal basic income etc.
2
Hauke Hillebrandt
1y
All of Wikipedia is just 20GB. [https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia] Maybe there could be a way to share backups via BitTorrent or an 'offline version' of it... it would fit comfortably on most modern smartphones.
8
Linch
1y
Digital solutions are not great because ideally you want something that can survive centuries or at least decades. But offline USBs in prominent + safe locations might still be a good first step anyway.
2
Greg_Colbourn
1y
I've got a full version of the English Wikipedia, complete with images, on my phone (86GB). It's very easy to get using the Kiwix [https://www.kiwix.org/en/] app.
2
Greg_Colbourn
1y
Maybe someone should make an EA-related collection and upload it to Kiwix? (Best books, EA Forum, AI Alignment Forum, LessWrong, SSC/ACX, etc.) This might be a good way of 80/20-ing the preservation of valuable information. As a bonus, people can easily and cheaply bury old phones with the info on them, along with solar/hand-crank chargers.
2
Greg_Colbourn
1y
I note there isn't much on Kiwix in terms of survival/post-apocalypse collections (just a few TED talks and YouTube videos): a low-hanging fruit ripe for the picking.
1
wbryk
1y
The group who discovers this restart manual could gain a huge advantage over the other groups in the world population -- they might reach the industrial age within a few decades while everyone else is still in the stone age. This discoverer group will therefore have a huge influence over the world civilization they create. I wonder if there's a way to ensure that this group has good values, even better values than our current world. For example, imagine there were a series of value tests within the restart manual that the discoverers were required to pass in order to unlock the next stage of the manual. Either multiple groups rediscover the manual and fail until one group succeeds, or some subgroup unlocks the next step and is able to leap technologically above the others fast enough to ensure that their values flourish. If those value tests somehow ensure that a high score means the test-takers care deeply about the values we want them to have, then only those who've adopted these values will rule the earth. As a side note, this would make a really cool short story or movie :)
agnode
1y
480

SEP for every subject

Epistemic institutions

Create free online encyclopedias for every academic subject (or those most relevant to longtermism), written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on Sci-Hub (e.g. https://oxfordre.com/). This would provide decision-makers and the public with better access to academic views on a variety of topics.

5
Peter S. Park
1y
Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?
2
agnode
1y
I've read that experts often get frustrated with wikipedia because their work ends up getting undone by non-experts. Also there probably needs to be financial support and incentives for this kind of work. 
1
brb243
1y
Yeah, make it accessible and normally accepted.
2
Yitz
1y
This would have to be a separate project from my proposed direct Wikipedia editing, but I'd  be very much in support of this (I see the efforts as being complementary)

Purchase a top journal

Metascience

Journals give academics bad incentives: they require new knowledge to be written in hard-to-understand language, without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivises high-quality work on the most important topics would begin to turn the scientific system around.

We could, of course, simply get the Future Fund to pay for this. There is, however, an alternative that might be worth thinking about.

This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.

Here is one way we could do this:

  1. Use a system like pol.is to identify points of consensus between universities. This should be about the rules going forward if we buy the journal. For example, do they all want pre-registration? What should the copyright situation be? How should peer review work? How should the journal be run? Etc.
  2. Whatever the consensus is, commit to implementing it if the buyout is successful
  3. Start crowdsourcing the funds needed. To maximise the chance of success, this should be done using a DAC (dominant assurance contract). This works like any other crowdfunding mechanism (GoFundMe, Kickstarter, etc), except we have a pool of money that is used to pay the early backers if we fail to meet the goal. If the standard donation size we're asking the unis for i
... (read more)
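For readers unfamiliar with the mechanism, here is a toy payoff model of a dominant assurance contract, under simplifying assumptions of my own (a fixed set of targeted backers, each of whom is pivotal to reaching the goal): whatever the others do, pledging pays at least as well as abstaining, which is what makes a DAC attractive for crowdfunding a journal buyout.

```python
# Toy model of a dominant assurance contract (DAC). Assumptions are mine
# for illustration: the contract targets a fixed set of backers and needs
# every one of them (each backer is pivotal); if the goal is missed,
# contributors are refunded AND paid a failure bonus from the
# entrepreneur's escrow.

def backer_payoff(contributes: bool, others_contribute: bool,
                  pledge: float, value: float, bonus: float) -> float:
    funded = contributes and others_contribute
    if funded:
        return value - pledge                 # good produced, pledge spent
    return bonus if contributes else 0.0      # refund + bonus, or nothing

# With value > pledge and bonus > 0, contributing dominates abstaining:
for others in (True, False):
    assert backer_payoff(True, others, pledge=100, value=150, bonus=5) > \
           backer_payoff(False, others, pledge=100, value=150, bonus=5)
```

The real proposal relaxes the "everyone is pivotal" assumption, which weakens but does not destroy the incentive; the sketch only shows the core logic.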
3
Jonathan Nankivell
1y
Update: I emailed Alex Tabarrok [https://asp.mercatus.org/scholars/alexander-tabarrok] to get his thoughts on this. He originally proposed using dominant assurance contracts to solve public good problems, and he has experience testing the idea empirically. He makes the following points about my suggestion:

  • The first step is the most important. Without clarity about what the public good will be and who is expected to pay for it, the DAC won't work.
  • You should probably focus on libraries as the potential source of funding. They are the ones who pay subscription fees, and they are the ones who would benefit from this.
  • DACs are a novel form of social technology. It might be best to try to deliver smaller public goods first, allowing people to get more familiar with them, before trying to buy a journal.

He also suggested other ways to solve the same problem:

  • Have you considered starting a new journal? This should be cheaper. There would also be a coordination question to solve to make it prestigious, but this one might be easier.
  • Have you considered 'flipping' a journal? Could you take the editors, reviewers, and community that support an existing journal, and persuade them to start a similar but open-access journal? (The Fair Open Access Alliance [https://www.fairopenaccess.org/] seems to have had success facilitating this. Perhaps we should support them?)

My current (and weakly held) position is that flipping editorial boards to create new open-access journals is the best way to improve publishing standards. Small steps towards a much better world. Would it be possible for the Future Fund to entice 80% of the big journals to do this? The top journal in every field? Maybe.
2
brb243
1y
Is there a reputational risk here, of an actor in the broader EA community seeking to influence scientific discourse by economic (not peer-reviewed) means? There are repositories of papers, such as that of the Legal Priorities Project [https://www.ssrn.com/index.cfm/en/legal-priorities-project-res/#:~:text=The%20Legal%20Priorities%20Project%20Working,the%20protection%20of%20future%20generations.], that are cool, and the EA community pays attention to aggregate narratives to keep some of its terms rather exclusive and convincing. If you mean coordinating research, to learn from the scientific community, then reading papers and corresponding with academics can make sense. Maybe on the EA Forum or so. No need to buy a journal.
2
James Bailey
1y
Agree; I was thinking of submitting a proposal like this. A few ways to easily improve most journals:
- Require data and code to be shared.
- Open access, but without the huge author fees most open-access journals charge.
- If you do charge any fees, use them to pay reviewers for fast reviews.
1
Jonas Moss
1y
Shouldn't reviewers be paid, regardless of fees? It is a tough job, and there should be strong incentives to do it properly.

A Longtermist Nobel Prize

All Areas

The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.

(A variation on a suggestion by DavidMoss)

2
Gavin
1y
How much of the prestige is the money value, how much just the age of the prize, and how much the association with a fancy institution like the Swedish monarchy?  I seem to remember that Heisenberg etc were more excited by the money than the prize, back in the day.
2
RyanCarey
1y
The money isn't necessary - see the Fields Medal. Nor is the Swedish Monarchy - see the Nobel Memorial Prize in Econ. Age obviously helps. And there's some self-reinforcement - people want the prize that others want. My guess is that money does help, but this could be further investigated.
4
Hauke Hillebrandt
1y
The Jacobs Foundation awards $1m prizes to scientists as a grant - I think this might be one of the biggest - one could award $5-10m to make it the most prestigious prize in the world.
1
Taras Morozov
1y
I think Templeton Prize has become prestigious because they give more money than the Nobel on purpose.

Megastar salaries for AI alignment work

Artificial Intelligence

Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.

5
Greg_Colbourn
1y
Here [https://docs.google.com/document/d/1UAWwkt2L5aqgnceu1yHRyE-ALij6jzV0mOCwt983yBo/edit] is a more fleshed-out version, FAQ style. Comments welcome.
Fai
1y
450

Preventing factory farming from spreading beyond the earth

Space governance, moral circle expansion (yes I am also proposing a new area of interest.)

 

Early space advocates such as Gerard O’Neill and Thomas Heppenheimer had both included animal husbandry in their designs of space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA, have all expressed interests or announced projects to employ fish or insect farming in space. 

This, if successful, might multiply the suffering of farmed animals to many times the current number of farmed animals on earth, spread across the long-term future. Research is needed in areas like:

... (read more)

Longtermist Policy Lobbying Group

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bipartisan, there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. We think that while such a project could scale to be very large once successful, anyone working on it should start small and tread carefully, aiming to avoid issues around the unilateralist's curse and ensuring that longtermism does not become an overly partisan issue. It's also likely that longtermist lobbying might be best done not as a distinct idea but as lobbying for clear areas related to longtermism, such as climate change mitigation or pandemic preparedness.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

4
IanDavidMoss
1y
I think some form of lobbying for longtermist-friendly policies would be quite valuable. However, I'm skeptical that running lobbying work through a single centralized "shop" is going to be the most efficient use of funds. Lobbying groups tend to specialize in a specific target audience, e.g., particular divisions of the US federal government or stakeholders in a particular industry, because the relationships are really important to success of initiatives and those take time to develop and maintain. My guess is that effective strategies to get desired policies implemented will depend a lot on the intersection of the target audience + substance of the policy + the existing landscape of influences on the relevant decision-makers. In practice, this would probably mean at the very least developing a lot of partnerships with colleague organizations to help get things done or perhaps more likely setting up a regranting fund of some kind to support those partners. Happy to chat about this further since we're actively working on setting something like this up at EIP.
4
Peter Wildeford
1y
I agree with you on the value of not overly centralizing this and of having different groups specialize in different policy areas and/or approaches.
1
[anonymous]
1y
+1

Landscape Analysis: Longtermist Policy

Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes

Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bipartisan, but there may be many opportunities to pull the rope sideways on policy areas of concern.

We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

2
PeterSlattery
1y
I really like this idea and think that having a global policy network could be valuable over the long term. Particularly if coordinated with other domains of EA work. For instance, I can imagine RT and various other researcher orgs and researchers providing evidence on demand to EAs who are directly embedded within policy production. 
1
brb243
1y
If the polling shows that the most popular policies are those that safeguard the long-term objectives of the nation's top lobbyists while disregarding others' preferences (possibly catastrophically risky policies, a codified dystopia for some actors), do you recommend them as attention-captivating conversation starters, so that impartial consideration can be explained one-on-one to regulators and measures can be implemented to prevent the enactment of these popular policies, if I understand it correctly?
1
[anonymous]
1y
+1 
1
JBPDavies
1y
Hi Peter (if I may!), I love this and your other longtermism suggestions, thanks for submitting them! Not sure if you saw my suggestion below of a Longtermism Policy Lab, but maybe this is exactly the kind of activity that could fall under such an organisation/programme (within Rethink, even)? Likewise for your suggestion of a lobbying group: by working directly with societal partners (e.g. national ministries across the world) you could begin implementation directly through experimentation. I've been involved in a similar (successful) project called the Transformative Innovation Policy Consortium (TIPC), which works with, for example, the Colombian government to shape innovation policy towards sustainable and just transformation (as opposed to systems optimisation). Would love to talk to you about your ideas for this space if you're interested. I'm working with the Institutions for Longtermism research platform at Utrecht University and we're still trying to shape our focus, so there may be some scope for piloting ideas.
2
IanDavidMoss
1y
JBPDavies, it sounds like you and I should connect as well -- I run the Effective Institutions Project [https://effectiveinstitutionsproject.org/] and I'd love to learn more about your Institutions for Longtermism research and provide input/ideas as appropriate.
1
JBPDavies
1y
Sounds fantastic - drop me an email at j.b.p.davies@uu.nl [j.b.p.davies@uu.nl] and I would love to set up a meeting. In the meantime I'll dive into EIP's work!
2
Peter Wildeford
1y
Sure! Email me at peter@rethinkpriorities.org [peter@rethinkpriorities.org] and I will set up a meeting.

Experiments to scale mentorship and upskill people

Empowering Exceptional People, Effective Altruism

For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and there are few well-defined paths for young or early-career people to follow, so it can be hard to enter the field. Experts in the field are often ideal mentors: they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open up access to new opportunities. But there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The possible solutions are very broad: developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.

Proportional prizes for prescient philanthropists

Effective Altruism, Economic Growth, Empowering Exceptional People

A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.

Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.

If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.

5
IanDavidMoss
1y
Reading this again, I want to register that I am much more excited about the idea of rewarding donors for early investment than I am about the other elements of the plan. As someone who has founded multiple organizations, the task of attaching precise retrospective monetary values to different people's contributions of time, connections, talent, etc. in a way that will satisfy everyone as fair sounds pretty infeasible. Early donations, by contrast, are an objective and verifiable measure of value that is much easier to reward in practice. You could just say that the first, say $500k that the org raises is eligible for retroactive reward/matching/whatever, with maybe the first $100k or something weighted more heavily. It's also worth thinking through the incentives that a system like this would set up, especially at scale. It would result in more seed funding and more small charities being founded and sustained for the first couple of years. I personally think that's a good thing at the present time, but I also know people who argue that we should be taking better advantage of economies of scale in existing organizations. There is probably a point  at which there is too much entrepreneurship, and it's worth figuring out what that point is before investing heavily in this idea.
4
Dawn Drescher
1y
Owen Cotton-Barratt and I have thought about this for a while and have mostly arrived at the solution that beneficiaries who collaborated on a project need to hash this out with each other. So make a contract, like in a for-profit startup, specifying who owns how much of the impact of the project. I think that capable charity entrepreneurs are a scarce resource as well, so we should try hard to foster them. So that's probably where a large chunk of the impact is. When it comes to the incentive structures: we (mostly Matt Brooks and I, but the rest of the team will be around) will hold a talk on the risks from perverse incentives in our system at the Funding the Commons II conference [https://fundingthecommons.io/] tomorrow. Afterwards I can link the video recording here. My big write-up, which is more comprehensive than the presentation but unfinished, is linked from the other proposal. That said … I don’t quite understand: more funding for donors -> more donors -> more money to charities -> higher scale, right? So this system would enable charities to hire more so people can specialize etc., not the opposite? Thanks!
3
colin
1y
This is really interesting. Setting up individual projects as DAOs could be an effective way to manage this.  The DAO issues tokens to founders, advisors, and donors.  If retrospectively it turns out that this was a particularly impactful project the funder can buy and burn the DAO tokens, which will drive up the price, thereby rewarding all of the holders.
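To make the buy-and-burn mechanic concrete, here is a toy numerical sketch. The constant-product pool and all numbers are my assumptions for illustration, not a real DAO design: the funder spends cash to buy project tokens from a liquidity pool and burns them, and the marginal price rises for the remaining holders.

```python
# Toy buy-and-burn model using a constant-product (x * y = k) pool.
# Hypothetical numbers; a real DAO would involve far more detail.

def buy_and_burn(cash_reserve: float, token_reserve: float, spend: float):
    k = cash_reserve * token_reserve
    new_cash = cash_reserve + spend
    new_tokens = k / new_cash             # tokens left in the pool
    burned = token_reserve - new_tokens   # bought out and destroyed
    price_before = cash_reserve / token_reserve
    price_after = new_cash / new_tokens
    return burned, price_before, price_after

# Funder retroactively spends 10k buying tokens from a 100k/1M pool:
burned, p0, p1 = buy_and_burn(100_000, 1_000_000, 10_000)
assert p1 > p0   # every remaining holder's tokens are now worth more
```

The point is only that the reward flows proportionally to all holders without the funder having to track them individually.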
2
Dawn Drescher
1y
Yep! There’s this other proposal for impact markets linked above. That’s basically that with slight tweaks. It’s all written in a technology-agnostic way, but one of the implementations that we’re currently looking into is on the blockchain. There’s even a bit of a prototype already. :-D
2
IanDavidMoss
1y
I really like this idea, and FWIW find it much more intuitive to grasp than your impact markets proposal.
2
Dawn Drescher
1y
Sweet, thanks! :-D Then it’ll also help me explain impact markets to people.

High quality, EA Audio Library (HEAAL)

all/meta, though I think the main value add is in AI

(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)

Audio has several advantages over text when it comes to consuming long-form content, with one significant example being that people can consume it while doing some other task (commuting, chores, exercising) meaning the time cost of consumption is almost 0. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:

I think a highly talented, dedicated generalist could become one of the world’s 25 most broadly knowledgeable people on the subject (in the sense of understanding a number of different agendas and arguments that are out there, rather than focusing on one particular line of research), from a standing start (no background in AI, AI alignment or computer science), within a year

What does high quality mean here, and what content might get covered?

  • High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical n

... (read more)
2
Nathan Young
1y
Frankly, I'd like the ability to send a written feed somewhere and have it turned into audio, maybe crowdfunded. Clearly Nonlinear can do it, so why can't I have it for, say, Bryan Caplan's writing?
3
alex lawsen (previously alexrjl)
1y
If you're ok with autogenerated content of roughly the quality of nonlinear, both Pocket [getpocket.com] and Evie [https://download.cnet.com/Evie-The-eVoice-book-reader/3000-18495_4-78351321.html] are reasonable choices.

High-quality human performance is much more engaging than autogenerated audio, fwiw.

4
alex lawsen (previously alexrjl)
1y
Hence the original pitch!
2
Nathan Young
1y
Nonlinear could be paid to repost the most upvoted posts, but with voice actors.
Arb
1y390

Our World in Base Rates

Epistemic Institutions

Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a very valuable public good. 

So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example. 

e.g. 

“85% of big data projects fail”; 
“10% of people refuse to be vaccinated because they fear needles” (pre-COVID, so you can compare with COVID hesitancy); 
“11% of ballot initiatives pass”; 
“7% of Emergent Ventures applications are granted”; 
“50% of applicants get 80k advice”; 
“x% of applicants get to the 3rd round of OpenPhil hiring”, “which takes y months”; 
“x% of graduates from country [y] start a business”.

MVP:

  • come up with hundreds of base rates relevant to EA causes
  • scrape Wikidata for them, or diffbot.com
  • recurse: get people to forecast the true value, or a later value (put them in a private competition on Foretold, index them on metaforecast.org)


Later, Q... (read more)
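The "recurse" step above amounts to scoring user forecasts against the base rate once it is known. Here is a minimal sketch, assuming Brier scoring (a standard proper scoring rule for probability forecasts); the rates and names are illustrative placeholders, not verified figures:

```python
# Hypothetical sketch of the "recurse" step: score forecasts of a base rate
# against its later-revealed true value. All figures are illustrative.

def brier_score(forecast: float, outcome: float) -> float:
    """Squared error between a probability forecast and the realised rate."""
    return (forecast - outcome) ** 2

# Base rates from the examples above, treated here as placeholder values.
true_rates = {
    "big_data_project_fails": 0.85,
    "ballot_initiative_passes": 0.11,
}

# One forecaster's guesses, made before the true values are revealed.
forecasts = {
    "big_data_project_fails": 0.80,
    "ballot_initiative_passes": 0.20,
}

scores = {k: brier_score(forecasts[k], true_rates[k]) for k in true_rates}
mean_score = sum(scores.values()) / len(scores)
print(round(mean_score, 4))  # lower is better; here 0.0053
```

A proper scoring rule like this rewards honest probability estimates, which is what a private competition on Foretold or an index on metaforecast.org would need.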

I think this is neat. 

Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."

I of course see this as basically a bunch of estimation functions, but you get the idea.

Teaching buy-out fund

Allocate EA Researchers from Teaching Activities to Research

Problem: Professors spend a lot of their time teaching instead of researching. Many don’t know that many universities offer “teaching buy-outs”, where if you pay a certain amount of money, you don’t have to teach. Many also don’t know that a lot of EA funding would be interested in paying that.

Solution: Make a fund explicitly for this purpose, so that more EAs know about it. This is the 80/20 of promoting the idea. Alternatively, funders could simply advertise this offering in other ways.

Adversarial collaborations on important topics

Epistemic Institutions

There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, among which there are reasonable people with very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve the disagreement or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified from them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.

Existing and past work relevant to this space include the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.

1
brb243
1y
What topics, and which are not yet covered? (E.g., militaries already talk about peace.) What adversaries? Are they really collaborators (such as private actors considering mergers, acquisitions, and industry interests, or public actors considering trade and alliance advantages)? Or do you mean decisionmaker–nondecisionmaker collaborations? The issue there is that systems are internalized, so from the nondecisionmakers you can get "I want to be as powerful over others as the decisionmakers," or an inability to express or even know their preferences (a chicken is in a cage, so what can it say; a cricket is on the farm, what does it know about its preferences). Probably, adversaries would prefer to talk about "how can we get the other to give us profit" rather than "how can we make impact," since the implicit agreement is "not impact, profit."

Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism

Epistemic Institutions, Values and Reflective Processes

Right now longtermism is being developed within a relatively narrow set of stakeholders and participants relative to the broad set of people (and nonhumans) who would be affected by the decisions we make. We'd like to see focus groups that engage a more diverse group of people (diverse across many axes, including but not limited to race, gender, age, geography, and socioeconomic status), explain longtermism to them, and explore what visions they have for the future of humanity (and nonhumans). Hopefully, through many iterations, we can find a way to cross what is likely a rather large initial inferential distance and learn how a broader, more diverse group of people would think about longtermism once ideally informed. This can be related to and informed by work on deliberative democracy, and could also help initiate what longtermists call "the long reflection".

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

7
IanDavidMoss
1y
I absolutely love this idea and really hope it gets funded! It reminds me in spirit of the stakeholder research that IDinsight did [https://www.idinsight.org/publication/measuring-peoples-preferences/] to help inform the moral weights GiveWell uses in its cost-effectiveness analysis. At scale, it could parallel aspects of the process used to come up with the Sustainable Development Goals [https://en.wikipedia.org/wiki/Post-2015_Development_Agenda].
mic
1y380

Foundational research on the value of the long-term future

Research That Can Help Us Improve

If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?


To elaborate on what I have in mind with this proposal, it seems important to conduct research beyond reducing existential risk over the next century – we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of the posts:

... (read more)
8
Fai
1y
This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.

Incubator for Independent Researchers

Training People to Work Independently on AI Safety

Problem: AI safety is bottlenecked by management and jobs. There are fewer than 10 orgs where you can work on AI safety full time, and they are limited by the number of people they can manage and by their research interests.

Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.

This could also be structured as a research organization instead of an incubator.

EA Marketing Agency

Improve Marketing in EA Domains at Scale

Problem: EAs aren’t good at marketing, and marketing is important.

Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.

Expected value calculations in practice

Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.

Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.

We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.

AGI Early Warning System
Anonymous Fire Alarm for Spotting Red Flags in AI Safety

Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.

Solution: create an anonymous form – a “fire alarm” (a whistleblowing Andon Cord of sorts) where these employees can report what they’re seeing. We could restrict the audience to a small council of AI safety leaders, who could then determine next steps. This could, in theory, provide days to months of additional response time.

Alignment Forum Writers

Pay Top Alignment Forum Contributors to Work Full Time on AI Safety

Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.

Solution: Offer them enough money to quit their job and work on AI safety full time.

(Per Nick's note, reposting)

Political fellowships

Values and Reflective Processes, Empowering Exceptional People

We’d like to fund ways to get people who wouldn’t otherwise run for political office to run for political office. It's like a MacArthur grant. You get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get some training, as with the DCCC and NRCC, and when you run, you get $2 million spent by a super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.

3
Jan-Willem
1y
Great idea, at TFG we have similar thoughts and are currently researching if we should run it and the best way to run a program like this. Would love to get input from people on this.

The Billionaire Nice List

Philanthropy

A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, people like checking lists. Let's attempt to create a list which can be sorted by different moral weights and incentivises billionaires to do more good. 

9
PeterSlattery
1y
I really like this. I had a similar idea focused on trying to change the incentive landscape for billionaires to make it as high status as possible to be as high impact as possible. I think that lists and awards could be a good start. Would be especially good to have the involvement of some aligned ultrawealthy people who might have a good understanding of what will be effective.
3
Nathan Young
1y
Yeah, I would love those of us who know billionaires, or are billionaires, to give a sense of what motivates them.

Pro-immigration advocacy outside the United States

Economic Growth

Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress in the U.S. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing or keeping immigration levels about the same. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for immigration there.

What this project could look like:

  1. Identify 2-5 developed countries where pro-immigration advocacy seems especially promising.
  2. Build partnerships with people and orgs in these countries with expertise in pro-immigration advocacy.
  3. Identify the most promising opportunities to increase immigration to these countries and act on them.

Related posts:

... (read more)
5
Greg_Colbourn
1y
Japan is coming from a very low base - 2% of its population is foreign-born - vs. 15% in the US [https://worldpopulationreview.com/country-rankings/immigration-by-country]. A lot of room for more immigrants before "saturation" is reached, I guess. Although I imagine that xenophobia and racism are anti-correlated with immigration, at least at low levels [citation needed].
1
brb243
1y
Top countries by refugees per capita [https://www.nrc.no/perspectives/2020/the-10-countries-that-receive-the-most-refugees/] The world's most neglected displacement crises [https://www.nrc.no/shorthand/fr/the-worlds-most-neglected-displacement-crises-in-2020/index.html] Should these countries be supported in their efforts (I read roughly $0.1/person/day for food), and the crises prevented, for example by supporting the parties in source areas to make and abide by legal agreements over resources, preventing drug trade through higher-yield farming practices and education or urban career growth prospects, and improving curricula to develop skills in caring for others (teaching preventive healthcare and preference-based interactions)? This could be a cost-effective alternative to pro-immigration advocacy. Otherwise, either privileged persons escape a poor situation that remains unsolved, or unskilled persons with poor norms arrive in places that may not improve their subjective wellbeing, which depends on how those norms have been internalized.
2
BrownHairedEevee
1y
Your question is very long and hard to understand. Can you please reword it in plain English?
1
brb243
1y
Displacement crises are large and neglected. For example, for one of the top 10 crises, 6,000 additional persons are displaced per day. Displaced persons can be supported by very low amounts, which make large differences. For example, $0.1/day for food and a low amount for healthcare. In some cases, this would have otherwise not been provided. So, supporting persons in crises in emerging economies, without solving the issues, can be cost-effective compared to spending comparable effort on immigration reform.

Second, supporting countries that already host refugees of neglected crises to better accommodate these persons (so that they do not need to stay in refugee camps reliant on food aid and healthcare aid), for example by special economic zones, if these allow for savings accumulation, and education, so that refugees can better integrate and the public welcomes it due to economic benefits, can also be competitive in cost-effectiveness compared to immigration reform in countries with high public attention, political controversy, and much smaller refugee populations, such as the US. The intervention is more affordable, makes a larger difference for the intended beneficiaries, has a higher chance of political support, and can be institutionalized while solving the problem.

Third, allocating comparable skills to neglected crises rather than to immigration reform in industrialized nations, where a unit of decisionmaker attention can be much more costly, such as the US, can resolve the causes of these crises. These can include limited ability to draft and enforce legal agreements around natural resources, or violence related to the limited alternative prospects of drug farmers, which could be mitigated by sharing economic alternatives such as higher-yield commodity farming practices, agricultural value addition skills, or upskilling systems related to work in urban areas. So, the cost-effectiveness of solving neglected crises by legal, political, and humanitarian assistance can be much higher th

Improving ventilation

Biorisk

Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.

We'd like to see a project that does rigorous research to establish strong ventilation strategies for a variety of contexts and evaluates their effectiveness against various air-quality problems. Once successful ventilation strategies are developed, and assuming it would be cost-effective to do so, this project could then aim to roll out ventilation and campaign/market for ventilation interventions as a for-profit, non-profit, or hybrid.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

Advocacy organization for unduly unpopular technologies

Public opinion on key technologies.

Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.

4
Jackson Wagner
1y
Probably want to avoid unifying all of these under one "we advocate for things that most people hate" advocacy group!  Although that would be pretty hilarious.  But funding lots of little different groups in some of these key areas is great, such as trying to make it easier to build clean energy projects of all kinds as I mention here [https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=6HJKgrbGzxLkXcCfJ].
4
simonfriederich
1y
Right, it sounds absurd and maybe hilarious, but it's actually what I had in mind. The advantage is internal coherence. The idea is basically to let "ecomodernism" go mainstream, having a Greenpeace-like org that has ideas more similar to the Breakthrough Institute.  It's far from clear that this can work, but it's worth a try, in my view. About your suggestion: I love it and voted for it. 
2
Jackson Wagner
1y
Maybe so... like an economics version of the ACLU that builds a reputation of sticking up for things that are good even though they're unpopular. Might work especially well if oriented around the legal system (where ACLU operates and where groups like Greenpeace and the ever-controversial NRA have had lots of success), rather than purely advocacy? Having a unified brand might help convince people that our side has a point. For instance, a group that litigates to fight against nimbyism by complaining about the overuse of environmental laws or zoning regulations... the nimbys would naturally see themselves as the heroes of the story and assume that lawyers on the pro-construction side were probably villains funded by big greedy developers. Seeing that their opposition was a semi-respected ACLU-like brand that fought for a variety of causes might help change people's minds on an issue. (On the other hand, I feel like the legal system is fundamentally friendlier terrain for stopping projects than encouraging them, so the legal angle might not work well for GMOs and power plants. But maybe there are areas like trying to ban Gain-of-Function research where this could be a helpful strategy.) We'd still probably want the brand of this group to be pretty far disconnected from EA -- groups like Greenpeace, the NRA, etc naturally attract a lot of controversy and demonization.
2
Andreas F
1y
Since lifecycle analyses show that it is most likely the best option, I fully agree on the nuclear part. I also agree on the GMO part, since large meta-analyses show no adverse effects on the environment (compared on yield/area, biodiversity/dollar, yield/dollar, and labor/yield) relative to other agriculture. I have no assessment of cellular agriculture, but I do think it is fair to support such schemes (at least until we have solid data, and then decide again).
2
Peter S. Park
1y
Note: Wanted to share an example. I think that while nuclear fission reactors are unpopular and this unpopularity is sticky, it is possible that efforts to preemptively decouple the reputation of nuclear fusion reactors from that of nuclear fission reactors could succeed (and that nuclear fusion's hypothetical positive reputation could be sticky over time). But it is also possible that the unpopularity of nuclear fission will stick to nuclear fusion. Which of these two possibilities occurs, and how proactive action can change this, is mysterious at the moment. This is because our causal/theoretical understanding of the science of human behavior is incomplete (see my submission, "Causal microfoundations for behavioral science"). Preemptive action regarding historically unprecedented settings like emergent technologies (for which much of the relevant data may not yet exist) can be substantially informed by externally valid predictions of people's situation-specific behavior in such settings.
3
simonfriederich
1y
Interesting thought. FWIW, I think it's more realistic that we can turn around public opinion on fission first, reap more of the benefits of fission, and then have a better public landscape for fusion, than that we accept the unpopularity of fission as a given but somehow end up with popular fusion. But I may well be wrong.

Building the grantmaker pipeline

Empowering Exceptional People, Effective Altruism

The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding and training new grantmakers on best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, and more.

NB: This might be a refinement of fellowships, but I think it's particularly important.

7
Jackson Wagner
1y
This is such a good idea that I think FTX is already piloting a regranting scheme as a major prong of their Future Fund program [https://forum.effectivealtruism.org/posts/CQQtKkMGeGLxbgLjP/the-future-fund-s-regranting-program]! But it would be cool to build up the pipeline in other more general/systematic ways -- maybe with mentorship/fellowships, maybe with more experimental donation designs like donor lotteries and impact certificates [https://www.impactcerts.com/], maybe with software that helps people to make EA-style impact estimates [https://forum.effectivealtruism.org/posts/2ux5xtXWmsNwJDXqb/ambitious-altruistic-software-engineering-efforts#Internal_Tools__to_be_used_by_EAs_].
4
Cillian Crosson
1y
It seems that FTX's Regranting Program could be a great way to scalably distribute funds & build the grantmaker pipeline. We (Training for Good [https://www.trainingforgood.com/]) are also developing a grantmaker training programme like what James has described here to help build up EA's grantmaking capacity (which could complement FTX's Regranting Program nicely). It will likely be an 8 week, part-time programme, with a small pot of "regranting" money for each participant and we're pretty excited to launch this in the next few months. In the meantime, we're looking for 5-10 people to beta test a scaled-down version of this programme (starting at the end of March). The time commitment for this beta test would be ~5 hours per week (~2 hrs reading, ~2 hrs projects, ~1 hr group discussion). If anyone reading this is interested, feel free to shoot me an email cillian@trainingforgood.com [cillian@trainingforgood.com] 

Website for coordinating independent donors and applicants for funding

Empowering exceptional people, effective altruism

At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website which allows applicants to post requests for funding and allows potential donors to browse those requests and offer to fully or partially fund applicants, seems like an effective solution.

Nuclear arms reduction to lower AI risk

Artificial Intelligence and Great Power Relations

In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think understanding the dynamics of and policy responses to this topic are under-researched and would benefit from further investigation.

4
aogara
1y
Strongly agree with this. There are only a handful of weapons that threaten catastrophe to Earth’s population of 8 billion. When we think about how AI could cause an existential catastrophe, our first impulse shouldn’t be to think of “new weapons we can’t even imagine yet”. We should secure ourselves against the known credible existential threats first. Wrote up some thoughts about doing this as a career path here: https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7 [https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7]
2
Greg_Colbourn
1y
On the flip side, you could make part of your 'pivotal act [https://intelligence.org/late-2021-miri-conversations/#:~:text=and%20Richard%20discuss%20%E2%80%9C-,pivotal%20acts,-%E2%80%9D%20%E2%80%94%20in%20particular%2C%20actions]' be the neutralisation of all nuclear weapons [https://www.newscientist.com/article/dn3734-neutrino-beam-could-neutralise-nuclear-bombs/#:~:text=A%20super%2Dpowered%20neutrino%20generator,neutrinos%20straight%20through%20the%20Earth.].
will_c
1y120

Incremental Institutional Review Board Reform

Epistemic Institutions, Values and Reflective Processes

Institutional Review Boards (IRBs) regulate biomedical and social science research. In addition to slowing and deterring life-saving biomedical research, IRBs interfere with controversial but useful social science research, e.g., Scott Atran was deterred from studying Jihadi terrorists; Mark Kleiman was deterred from studying the California prison system; and a Florida State University IRB cited public controversy as a reason to deter research. We would like to see a group focused on advocating for plausible reforms to IRBs that would allow more social science research to be performed. Some plausible examples:

  1. Prof. Omri Ben-Shahar’s proposal to replace exempt IRB reviews with an electronic checklist or
  2.  Zachary Schrag’s proposal (from Ethical Imperialism) that Congress remove social science research from OHRP jurisdiction by amending the National Research Act of 1974. 

Concrete steps to these goals could be: 

  1. sponsoring a prize for the first university that allowed use of Prof. Omri Ben-Shahar’s electronic checklist tool;
  2.  setting up a journal for “Deterred Social Science Resea
... (read more)

Top ML researchers to AI safety researchers

Pay top ML researchers to switch to AI safety

Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.

Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.

EA Productivity Fund

Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.

Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can't afford them or would be put off by the cost (because of imposter syndrome, or just because it feels selfish).

Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.

Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)

Economic Growth, Effective Altruism

Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days where you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance vs short-term productivity gains? What are the long-term health effects? Does it affect longevity?


Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.

My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medic... (read more)

quinn
1y120

Sub-extinction event drills, games, exercises

Civilizational resilience to catastrophes

Someone should build up expertise and produce educational materials / run workshops on questions like 

  1. Nuclear attacks on several cities in a 1000 mile radius of you, including one within 100 miles. What is your first move? 
  2. Reports of a bioweapon in the water supply of your city. What do you do? 
  3. You're a survivor of an industrial-revolution-erasing event. What chunks of knowledge from science can be useful to you? After survival, what are the steps to rebuilding? 
  4. 6 billion people died and the remaining billion are uniformly distributed throughout the planet's former population centers. How can you build up robustness of basic survival, food and water production, shelter, etc.?
  5. (for the IT folks) 5 years after number 4, basic needs are largely met, and scavengers have filled a garage with old laptops and computer parts. Can you begin rebuilding the internet to connect with other clusters around the world? 

Differentially distributing these materials/workshops to people who live in geographical areas likely to survive at all could help rebuilding efforts in worlds where massive sub-extinction events occur. 

Centralising Information on EA/AI Safety

Effective Altruism, AI Safety

There are many lists of opportunities available in EA/AI Safety and many lists of existing organisations. Unfortunately, these lists tend to become outdated. It would be extremely valuable to have a single list that is kept up to date and filterable according to various criteria. This would require paying someone part-time to maintain it.

Another opportunity for centralisation would be to create an EA link shortener with pretty URLs. For example, you'd be able to type ea.guide/careers to see information on careers, or ea.guide/forum to jump to the forum.

Notes: I own the URL ea.guide so I'd be able to donate it.
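As a minimal sketch of how the shortener idea above could work (the short paths and destination URLs here are hypothetical placeholders, not a real mapping; a production service would serve these as HTTP 301 redirects behind the domain, e.g. via nginx or a small web app):

```python
# Hypothetical redirect table for an ea.guide-style link shortener.
# Keeping the mapping in one version-controlled table is what would
# make it easy to keep up to date.
REDIRECTS = {
    "careers": "https://example.org/careers",          # placeholder target
    "forum": "https://forum.effectivealtruism.org/",
}

def resolve(path):
    """Return the destination URL for a short path, or None if unknown."""
    return REDIRECTS.get(path.strip("/").lower())
```

Paths are normalised (slashes stripped, lowercased) so `ea.guide/Careers/` and `ea.guide/careers` resolve identically.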

Automated Open Project Ideas Board

 The Future Fund

All of these ideas should be submitted to a board where anyone can forecast their value (e.g., in lives saved per dollar) as it would be rated by a trusted research organisation, say Rethink Priorities. The forecasts could be reputation-based or run as prediction markets. That research organisation then checks 1% of the ideas and scores them; these scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Funders can then donate to them as they see fit.
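A hedged sketch of the weighting mechanism described above (all names and numbers are illustrative; a real system would also need to handle incentives and forecasters with no overlap with the audited sample):

```python
# Illustrative sketch: a trusted evaluator audits a small sample of ideas;
# forecasters are weighted by their accuracy on that sample, and the
# weights are used to aggregate forecasts on the unaudited ideas.
# Assumes each forecaster predicted at least one audited idea.

def forecaster_weights(forecasts, audited_scores):
    """Weight each forecaster by 1 / (1 + mean absolute error) on audited ideas."""
    weights = {}
    for name, preds in forecasts.items():
        errors = [abs(preds[idea] - true)
                  for idea, true in audited_scores.items() if idea in preds]
        mae = sum(errors) / len(errors)
        weights[name] = 1.0 / (1.0 + mae)
    return weights

def weighted_estimate(idea, forecasts, weights):
    """Weighted-average forecast for one unaudited idea."""
    relevant = [(weights[n], preds[idea])
                for n, preds in forecasts.items() if idea in preds]
    total = sum(w for w, _ in relevant)
    return sum(w * p for w, p in relevant) / total

# The well-calibrated forecaster (alice) gets more say than the poorly
# calibrated one (bob) on the unaudited idea "y".
forecasts = {"alice": {"x": 10.0, "y": 9.0}, "bob": {"x": 4.0, "y": 3.0}}
audited = {"x": 10.0}  # evaluator's score for the audited sample
weights = forecaster_weights(forecasts, audited)
estimate = weighted_estimate("y", forecasts, weights)
```

The spot-check scales because the evaluator's effort grows with the audit sample, not with the total number of ideas.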

Massive US-China exchange programme

Great power conflict, AI

Fund (university) students to live with a host family in the other country: US-China, Russia-US, China-India, and potentially India-Pakistan. This is important if one thinks that personal experience makes individuals less likely to incentivise or encourage escalation, war, and certain competitive dynamics.

8
Jackson Wagner
1y
This might have a hard time meeting the same effectiveness bar as #13, "Talent Search" and #17, "Advocacy for US High-Skill Immigration", which might end up having some similar effects but seem like more leveraged interventions.
2
IanDavidMoss
1y
I disagree, as this idea seems much more explicitly targeted at reducing the potential for great power conflict, and I haven't yet seen many other tractable ideas in that domain.
5
Alex D
1y
My understanding is the Erasmus Programme [https://en.wikipedia.org/wiki/Erasmus_Programme] was explicitly started in part to reduce the chance of conflict between European states.

Nuclear/Great Power Conflict Movement Building

Effective Altruism

Given the current situation in Ukraine, movement-building related to nuclear x-risk or great power conflict is likely much more tractable than it was until recently. We don't know how long this window will last, and the memory of the public can be short, so we should take advantage of this opportunity. This outreach should focus on people with an interest in policy or potential student group organisers, as these people are most likely to have influence here.

Longtermism movement-building/election/appointment efforts, targeted at federal and state governments

Effective altruism

Increasing knowledge of and alignment with longtermism in government by targeted movement-building and facilitating the election/appointment of sympathetic people (and of close friends and family of sympathetic people) could potentially be very impactful. If longtermism/EA becomes a social norm in, say, Congress or the Washington 'blob', we could benefit from the stickiness of this social norm.

Pilot emergency geoengineering solutions for catastrophic climate change

Research That Can Help Us Improve

Toby Ord puts the risk of runaway climate change causing the extinction of humanity by 2100 at 1/1000, a staggering expected loss. Emergency solutions, such as seeding oceans with carbon-absorbing algae or creating more reflective clouds, may be our last chance to prevent catastrophic warming but are extraordinarily operationally complex and may have unforeseen negative side-effects. Governments are highly unlikely to invest in massive geoengineering solutions until the last minute, at which point they may be rushed in execution and cause significant collateral damage. We’d like to fund people who can:

  • Identify and pilot at large scale top geoengineering initiatives over the next 5-10 years to develop operational lessons. E.g. promote algae growth in a large, private lake, launch a small cluster of mirrors into space
  • Develop advanced supercomputer models, potentially with input from the above pilots, of the potential negative side-effects of geoengineering solutions
  • Identify and pilot harm-mitigation responses for geoengineering solutions

Epistemic status: there seems to be rea... (read more)

3
Kirsten
1y
I thought China had already done some low-key geoengineering? https://80000hours.org/podcast/episodes/kelly-wanser-climate-interventions/
1
Rory Fenton
1y
Thanks for sharing! My initial sense is that China's method is focused on controlling rainfall, which might mitigate some of the effects of climate change (e.g. reduce drought in some areas, reduce hurricane strength) but not actually prevent it. The ideas I had in mind were more emergency approaches to actually stopping climate change, either by rapidly removing carbon (e.g. algae in oceans) or by reducing the solar radiation absorbed by the Earth's surface (making clouds/oceans more reflective, space mirrors).

(Per Nick's note, reposting)

Market shaping and advanced market commitments

Epistemic institutions; Economic Growth

Market shaping is when an idea can only be jump-started by committed demand or other forces. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, an approach that has also been used several times for vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping mechanisms, and specific market-shaping efforts or advanced market commitments in our areas of interest.

jh
1y350

(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)

Crowding in other funding

We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive and some others are only ever likely to attract (or want) longtermist funding. But, we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:

  • Offering financial incentives (e.g. advanced market commitments)
  • Highlighting financial potential in major projects we would like to see (e.g. especially projects of the scale of the Grok / Brookfield bid for AGL)
  • Portfolio structures / financial engineering (e.g. Bridge Bio)
  • Appealing to social preferences (e.g. highlight points of 'common sense' overlap between longtermist views and ESG)
1
colin
1y
I'll add that advanced market commitments are also useful in situations where a jump-start isn't explicitly required. In that case, they can act similarly to prize-based funding.

A center applying epistemic best practices to predicting & evaluating AI progress

Artificial Intelligence and Epistemic Institutions

Forecasting and evaluating AI progress is difficult and important. Current work in this area is distributed across multiple organizations and individual researchers, not all of whom possess (a) the technical expertise, (b) knowledge and skill in applying epistemic best practices, and (c) institutional legitimacy (or who otherwise suffer from cultural constraints). Activities of the center could include providing services to AI groups (e.g. offering superforecasting training or prediction services), producing bottom-line reports on "How capable is AI system X?", hosting adversarial collaborations, pointing out deficiencies in academic AI evaluations, and generally pioneering "analytic tradecraft" for AI progress.

An Organisation that Sells its Impact for Profit

Empowering Exceptional People, Epistemic Institutions

Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) Selling Impact Certificates, (2) Prizes, and/or (3) Grants (but only if they value the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis. Funding would be raised from for-profit investors. Staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if the funding mechanisms (1-3) become sufficiently popular, but given the increased funding in longtermist circles, this now looks increasingly likely.

See also Retrospective grant evaluations,  Retroactive public goods funding, Impact ... (read more)

brb243
1y110

Tradable impact certificates

Effective Altruism, Research That Can Help Us Improve, Economic Growth

Issuing and trading impact certificates could popularize and normalize impact investment and profitable strategic research among the world's economic influencers. Economic growth would then be steered in an approximately good direction; what would remain is popularizing the management and incentivization of impact certificates.

Better understanding the needs of organisational leaders

Coincidence of wants problems

In EA, organisational leaders and potential workers often don't have good information about each other's needs and offerings (see EA needs consultancies). The same is true for researchers who might like to do research for organisations but don't know what to research. We would like to fund work to help resolve this. This could involve collecting advanced market commitments from funders (e.g., organisation x would pay up to $y for z hours of design time next year, on average). It could also involve identifying unknowns for key decision-makers in EA in relevant areas (e.g., institutional decision-making, longtermism, or animal welfare), which could be used to develop research agendas and kickstart research.
 

Organization to push for mandatory liability insurance for dual-use research

Biorisk and Recovery from Catastrophe

Owen Cotton-Barratt for the Global Priorities Project in 2015:

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such ‘dual use research of concern’ poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force researcher institutions to internalise some of the externalities and thereby:

Encourage university departments and priva

... (read more)
2
Dawn Drescher
1y
The (late) Global Priorities Project produced a long list of policy interventions and found that none of them were feasible at that time and place (UK in 2015), but maybe some of them can be adapted to other times or places where they are feasible. Niel Bowerman’s article “Research note: Good policy ideas that won’t happen (yet) [http://globalprioritiesproject.org/2015/02/research-note-good-policy-ideas-that-wont-happen-yet/]” from 2015 gives an overview.

Rationalism But For Group Psychology

Epistemic Institutions

LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.

We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and help groups achieve more impact, like how the original rationalist movement did so with individuals. We imagine this would involve identifying useful ideas from group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as trying to intentionally experiment. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to help align employees with organizational goals, better ways to keep track of projects, etc.

Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.

9
Gavin
1y
The Epistea Summer Experiment [https://www.lesswrong.com/posts/gBtHgFBcvRkQwgaGg/epistea-summer-experiment-ese] was a glorious example of this.  
Fai
1y300

Wild animal suffering in space

Space governance, moral circle expansion.

 

Terraforming other planets might cause animals to come to exist on those planets, whether through intentional or unintentional actions. These animals might live net-negative lives.

Also, we cannot rule out the possibility that there are already wild "animals" (or other sentient beings) living net-negative lives on other planets. (This does not bear directly on the Fermi Paradox, which concerns highly intelligent life, not life per se.)

Relevant research includes:

  • Whether wild animals lead net-negative or net-positive lives on Earth, and under what conditions; and whether the same would hold on different planets.
  • Tracking habitats, or even doing research on using AI and robotics to monitor and intervene in them. This might be critical if there are planets that host wild "animals" but are uninhabitable for humans, so that we cannot stay close to monitor (or intervene in) the welfare of these animals.
  • Communication strategies related to wild animal welfare, as the topic tends to cause controversy, if not outrage.
  • Philosophical research, including population ethics, environmental ethics, comparing welfare/suffering between species, moral uncertainty, suffering-focused vs non-suffering focused ethics.
  • General philosophical work on the ethics of space governance, in relation to nonhuman animals.
6
Dawn Drescher
1y
Another great concern of mine is that even if biological humans are completely replaced with ems or de novo artificial intelligence, these processes will probably run on great server farms that likely produce heat and need cooling. That results in a temperature gradient that might make it possible for small sentient beings, such as invertebrates, to live there. Their conditions may be bad, they may be r-strategists and suffer in great proportions, and they may also be numerous if these AI server farms spread throughout the whole light cone of the future. My intuition is that very few people (maybe Simon Eckerström Liedholm?) have thought about this so far, so maybe there are easy interventions to make that less likely to happen.
5
Dawn Drescher
1y
Brian Tomasik [https://reducing-suffering.org/will-space-colonization-multiply-wild-animal-suffering/] and Michael Dello-Iacovo [https://drive.google.com/file/d/17uXDoZkHPwlfykYDpqd74CtSKFzRkCwA/view?usp=sharing] have related articles.
3
DC
1y
Here's a related question [https://forum.effectivealtruism.org/posts/5bwRFz8yEnvNrW5r3/which-is-better-for-animal-welfare-terraforming-planets-or] I asked.

Physical AI Safety 

Drawing on work done in the former Soviet Union to improve safety in bioweapons and nuclear facilities (e.g. free consultations and installation of engineering safety measures, and at-cost upgrades of infrastructure such as ventilation and storage facilities), we would like to see a standard set of physical/infrastructure technologies developed to help monitor AI development labs/hardware and provide physical failsafes in the event of an unexpectedly rapid takeoff (e.g., a FOOM scenario). Although such a scenario is unlikely, standard guidelines adapting current best practices for data center safety (e.g., restrictions on devices, physical air gaps between critical systems and the broader world, extensive onsite power monitoring and backup generators) could be critical to prevent anxiety over physical and digital security from encouraging risk-taking behaviors by AI development programs (such as rushing builds, hiding locations, or inappropriate dual-use or shared facilities that decrease control over data flows). In particular, low-tech physical hardware such as low-voltage switches has already provided demonstrable benefit in safeguarding high-tech, high-risk activity (See the Goldsb... (read more)

AI alignment prize suggestion: Introduce AI Safety concepts into the ML community

Artificial Intelligence

Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, sometimes include some initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for further research,  and demonstrate that researchers can do alignment research while also publishing in top venues. A great example would be Optimal Policies Tend To Seek Power, published in NeurIPS. Future Fund could advertise prizes for any paper that gets published in a top ML/NLP/Computer Vision conference (from ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.

2
Yonatan Cale
1y
Risk: The course presents possible solutions to these risks, and the students feel like they "understood" AI risk; in the future it will be harder to talk to these students about AI risk, since they feel like they already have an understanding, even though it is wrong. I am specifically worried about this because I try imagining who would write the course and who would teach it. Would these people be able to point out the problems in the current approaches to alignment? Would they be able to "hold an argument" in class well enough to point out holes in the solutions that the students will suggest after thinking about the problem for five minutes? I'm not saying this isn't solvable, just that it's a risk.

EA Macrostrategy:

Effective Altruism

Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.

A Project Candor for Global Catastrophic Risks

Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism

This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War  public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threats from nuclear weapons and the Soviet Union had inaugurated a new era in human history: the Age of Peril. Today, at the precipice, the Age of Peril continues with possible risks from engineered pandemics, thermonuclear exchange, great power war, and more. Voting behavior and public discourse, however, do not seem attuned to these risks. A new privately-funded Project Candor would communicate to the public the nature of the threats, their probabilities, and what we can do about them. This proposal is related to "a fund for movies and documentaries" and "new publications on the most pressing issues," but differs in that it would be a unified and coordinated campaign across multiple media. 

A social media platform with better incentives

Epistemic Institutions, Values and Reflective Processes

Social media has arguably become a major way in which people consume information and develop their values, and the most popular platforms are far from optimally set up to bring people closer to truthfulness or altruistic ends. We’d love to see experiments with social media platforms that provide more pro-social incentives and yet have the potential to reach a large audience.

Eliminate all mosquito-borne viruses by permanently immunizing mosquitoes 

Biorisk and Recovery from Catastrophe

Billions of people are at risk from mosquito-borne viruses, including the threat of new viruses emerging. Over a century of large-scale attempts to eradicate mosquitoes as virus vectors has changed little: there could be significant value in demonstrating large-scale, permanent vector control for both general deployment and rapid response to novel viruses. Recent research has shown that infecting mosquitoes with Wolbachia, a bacterium that out-competes viruses (including dengue, yellow fever, and Zika), prevents these viruses from replicating within the insect, essentially immunizing it. The bacterium passes to future generations by infecting mosquito eggs, allowing a small release of immunized mosquitoes to gradually and permanently immunize an entire population of mosquitoes. We are interested in proposals for taking this technology to massive scale, with a particular focus on rapid deployment in the case of novel mosquito-borne viruses.

Epistemic status: Wolbachia impact on dengue fever has been demonstrated in a large RCT and about 10 city-level pilots. Impact on ot... (read more)

Increasing social norms of moral circle expansion/cooperation

Moral circle expansion

International cooperation on existential risks and other impactful issues is largely downstream of social norms of, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and cooperation to include out-group members could potentially be very impactful, especially in relevant countries (e.g., US and China) and among relevant decision-makers.

Movement-building/research/pipeline for content creators/influencers

Effective altruism

Content creators/influencers have (if popular) a lot of outreach potential and earning-to-give potential. We should investigate the possibility of investing in movement-building or a pipeline into this field. Practical research on how to be a successful influencer is also likely to be broadly applicable for movement-building in general.

7
Jackson Wagner
1y
Rather than a pipeline for turning EAs (of which there are few) into media creators and celebrity influencers, it might be wiser to go the other way, and try to specifically target media creators and celebrity influencers for conversion to EA.  In my view, the quickest path to something like a high-quality youtube documentary series about EA [https://forum.effectivealtruism.org/posts/rF7D78va9pf3mtHFh/should-we-produce-more-ea-related-documentaries-1?commentId=zomSYkEqTQ5FswZAM#comments] probably looks more like "find an existing youtube studio with some folks who are interested in EA" than it does "get a group of EAs together and create a media studio".  Although the quickest path of all probably involves a mix of both strategies -- like 2-3 committed EAs with experience in media getting funding and hiring a bunch of other people already working in media to help them build the project. I've been talking about documentaries/videos because there seem to be a number of EA efforts currently to create media studios or etc.  But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile.
3
Peter S. Park
1y
"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption). "But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!

Burying caches of basic machinery needed to rebuild civilisation from scratch

Recovery from Catastrophe

Should the worst happen, and a global catastrophe happens, we want to be able to help survivors rebuild civilisation as quickly and efficiently as possible. To this end, burying caches of machinery that can be used to bootstrap development is a useful part of a civilisation recovery toolkit. Such a cache could be in the form of a shipping container filled with heavy machines of open source design, such as a wind turbine, an engine, a tractor with back hoe, an oven, basic computers and CNC fabricators, etc. Written instructions would also be included of course! Along with a selection of useful books. First we aim to put together a prototype of such a cache and test it in various locations with people of various skill levels, to see how well they fare at "rebuilding" in simulated catastrophe scenarios. Learning from this, we will iterate the design until at least 10% of simulations are successful (to what is judged to be a reasonable level). We ultimately aim to bury 10,000 such caches at strategic locations around the world. Some will be in well known locations (for the case of sudde... (read more)

2
Greg_Colbourn
1y
(I've edited the last part re locations after some feedback in this post [https://forum.effectivealtruism.org/posts/o5hYrj5asR4jCBiZh] (worth a read!))  

Targeted social media advertising to give away high-value books

Effective Altruism, Values and Reflective Processes, Epistemic Institutions

Books are a high-fidelity means of spreading ideas. We think that high-value books are those that promote the safeguarding and flourishing of humanity and all sentient life, using evidence and reason. Many of the most valuable books have come out of the Effective Altruism (EA) movement over the last decade. We are keen for more people who want to maximize the good they do to read them. Offering those most likely to be interested in EA ideas free high-value books via targeted adverts on social media could be a highly cost-effective means of growing the EA movement in a values-preserving manner. Examples of target demographics are people interested in charity and volunteering, technology, or veg*anism. Examples of books that could be offered are The Life You Can Save, Doing Good Better, The Precipice, Human Compatible, and The End of Animal Farming. Perhaps a list of books could be offered, with people being allowed to choose any one.

4
MaxRa
1y
One related idea might be to offer the books at a heavy discount. Historically, I'm much more likely to read a book if it pops up on my Kindle marked down like this: €10 → €0.99, compared to books that are given away for free. Maybe book vendors are open to accepting a subsidy to lower the price of EA books?
2
Greg_Colbourn
1y
This was inspired by Ryan Carey's books in Library idea [https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=9gBqR9mGqjxSDLh4x], and the trend of EA book giveaways to various groups (such as those attending EA Cambridge's AGI Safety Fundamentals [https://www.eacambridge.org/agi-safety-fundamentals] course).
PhilC
1y110

DNA banks and backup of Svalbard Global Seed Vault

Biorisk and Recovery from Catastrophe

Arguably, the most important information that the world has generated is the diversity of codes for life. Technologies are available to allow all these to be stored quickly and at low cost in DNA banks. Seed banks currently provide security for the world’s food supply. In the event of a catastrophe, it may be important to have multiple seed banks for redundancy.
 

Redefine humanity & assisting its transition

Artificial intelligence, values and reflective processes

As humanity inevitably evolves into coexistence with AI, the adage "if a man will not work, he shall not eat" needs to be redefined. Apart from AI's early displacement effects, already apparent (cue the autonomous driving/trucking industry, etc.), humanity's productivity function will continue rising due to the intrinsic nature of AI (consider 3D printing normal/luxury goods at economies of scale), so much so that even plenitude becomes a potential problem. (As for the usual follow-up citation of 'what about the African kids': kindly note this is a separate distribution problem.) Ultimately, we should be contributing towards smoothing the AI transition curve, managing the initial displacement caused by AI, and then proactively managing integration.

Leo Gao
1y110

AI alignment: Evaluate the extent to which large language models have natural abstractions

Artificial Intelligence

The natural abstraction hypothesis is the hypothesis that neural networks will learn abstractions very similar to human concepts because these concepts are a better decomposition of reality than the alternatives. If it were true in practice, it would imply that large NNs (and large LMs in particular, due to being trained on natural language) would learn faithful models of human values, as well as bound the difficulty of translating between the model and human ontologies in ELK, avoiding the hard case of ELK in practice. If it turns out that the natural abstraction hypothesis is true at relevant scales, this would allow us to sidestep a large part of the alignment problem, and if it is false then this allows us to know to avoid a class of approaches that would be doomed to fail. 

We'd like to see work towards gathering evidence on whether the natural abstraction hypothesis holds in practice and how this scales with model size, with a focus on interpretability of model latents, and experiments in toy environments that test whether human simulators are favored in practice. Work towar... (read more)

Refinement of idea #33, "A fund for movies and documentaries":

I'd like to see filmmakers (including screenwriters and directors) working on EA-inspired films collaborate with social scientists and other subject-matter experts to ensure that their films realistically depict EA issues (such as x-risks) and social dynamics. These collaborations can help filmmakers avoid pitfalls like those committed by Don't Look Up and The Ministry for the Future.[1]

  1. ^

    From this review: "But while here and there an offhand reference to some reluctant group or other is made, they are, in Ministry, always feckless. The initial disaster undermines India’s Hindu nationalist party, rather than strengthening it. Further disasters are met with turns to socialism. The anti-fossil fuel terrorism that is portrayed (and both criticized and seen as necessary by varying characters) does not provoke anti-environmental terrorism in response. One particular striking example is about two-thirds of the way through the novel, when a small American town is evacuated in the name of half-Earth. While not welcomed, this evacuation is accepted in a way that is all but impossible to imagine, at least while we, looking up from

... (read more)

Accelerating Accelerators

Economic Growth

Y Combinator has had one of the largest impacts on GDP of any institution in history. We are interested in funding efforts to replicate that success across different geographies, sectors (e.g. healthcare, financial services), or corporate form (e.g. not-for-profit vs. for-profit). 

Nathan Young (1y):
I'd like research alongside this to try to ascertain how GDP affects existential risk.
Greg_Colbourn (1y):
See this [https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1] (by one of the Future Fund team!)

Salary Negotiation Service:

Effective Altruism

This service could negotiate salaries on behalf of EAs or others, who would then commit a proportion of the extra earnings to charity. This would increase the amount of money going to EA causes, promote Effective Altruism, and draw people deeper into the community. Given the number of EAs working at high-paying tech companies, this would likely be profitable.

(I remembered hearing this idea from someone else a few years back, but I can't remember who it was, unfortunately, so I can't give them credit unless they name themselves)

Risks: Might be expensive to find someone with the skills to do this and this might outweigh the money raised.

Jan-Willem (1y):
Hi Chris! We run this on a recurring basis with Training For Good! We've already had a few dozen people on the program and we are currently measuring the impact. See https://www.trainingforgood.com/salary-negotiation
Chris Leong (1y):
I was suggesting an actual service, not just training.

Ambitious Altruistic Software Engineering Efforts

Values and Reflective Processes, Effective Altruism

There is a long list of altruistic software projects waiting to be built, with various worthy goals such as improving forecasting, improving groups' ability to intelligently coordinate, or improving the quality of research and social-media conversations.

Arb (1y)

Evaluating large foundations

Effective Altruism

GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming they care about impact or about public opinion of their operations, and thus that our analysis could actually have some effect on them).

For instance, we've seen claims that the Global Fund, which spends $4B per year, meets a 2x GiveDirectly bar but not a GiveWell Top Charity bar.

This matters because most charity - and even most good charity - is still not run by EAs or on EA lines. Also, even big, cautious foundations can risk waste or harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate of conservative giving failing, so that we can compare it against hits-based giving. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.

Refining EA communications and messaging

Values and Reflective Processes, Research That Can Help Us Improve

If we want to convince a broad spectrum of people of the importance of doing good and ensuring the long term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will help EA outreach projects better tailor their messaging. Better communications could hugely increase the number of people who consume EA content, relate to the values of the EA movement, and ultimately commit their lives to doing good. We'd be excited to see people testing various frames and messaging across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.

Jack Lewars (1y):
I think this exists [https://forum.effectivealtruism.org/posts/HboobjbDwc5KgpNWi/ea-market-testing] (but could be much bigger and should still be funded by this fund).

TL;DR: EA Retroactive Public Goods Funding

In your format:

Deciding which projects to fund is hard, and one of the reasons for that is that it's hard to guess which projects will succeed and which will fail. But wait, startups have solved this problem perfectly: Anybody is allowed to vet a startup and decide to invest (bet) their money on this startup succeeding, and if the startup does succeed, then the early investors get a big financial return.

The EA community could do the same, only it is missing the part where we give big financial returns to projects that turned out good.

This would make the fund's job much easier: they would only have to vet which projects helped IN RETROSPECT - a far easier task - leaving the hard prediction work to the market.
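As a toy sketch of the mechanism (all names and numbers here are hypothetical), a retroactive prize could simply be split among a project's early funders pro rata to the share of the early budget each one covered:

```python
def retroactive_payouts(stakes: dict[str, float], prize: float) -> dict[str, float]:
    """Split a retroactive prize among early funders in proportion
    to how much of the project's early budget each one covered."""
    total = sum(stakes.values())
    return {funder: prize * amount / total for funder, amount in stakes.items()}

# Hypothetical: Alice covered $30k and Bob $10k of a project's early budget;
# the fund later awards a $100k retroactive prize.
payouts = retroactive_payouts({"alice": 30_000, "bob": 10_000}, 100_000)
print(payouts)  # {'alice': 75000.0, 'bob': 25000.0}
```

Early funders who backed projects that later prove impactful would thus see a return, giving them the same incentive to vet carefully that startup investors have.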

Context for proposing this

I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich and that would be somewhat broken to do. But I AM rich enough to fund this project and bet on it working well enough to get a Retroactive Public Good grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.

In Vitalik's words:

https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c

Ben Dean (1y):
Related: Impact Certificates [https://forum.effectivealtruism.org/tag/certificate-of-impact]

EA Forum Writers

Pay top EA Forum contributors to write about EA topics full time

Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.

Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.

Nathan Young (1y):
Pay users based on post karma (but not comment or question karma, which are much easier to get in comparison).
Yitz (1y):
Could create a disincentive to post more controversial ideas there, though.
Chris Leong (1y):
Goodhart's law.
Nathan Young (1y):
I don't think we'd be wedded to a single metric. Also, isn't karma already vulnerable to Goodhart's law? I think we should already be concerned about this.
Nathan Young (1y):
I don't think we'd be wedded to this metric.

A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire

Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes

Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website. There are additional proposals also on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really we’re quite clueless about what the future will bring.

This week alone I've discussed with friends and acquaintances three decisions, in completely different contexts, that might make the difference between paradise and hell for all sentient life - not just in the abstract way that cluelessness forces us to assign some probability to almost any outcome, but in the sense where we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions almost daily, on hardly any more information.

As a result, the robustness of an intervention has been the key criterion for prioritiza... (read more)

marswalker (1y):
I had a similar idea, and I think a few more things need to be included in the discussion. There are multiple levels of ideas in EA, and a red team becomes much more valuable when it engages with issues applicable to the whole of EA. Ideas like the institutional critique of EA, the other heavy tail, and others are often not read and internalized by EAs. It would be worth having a team that makes arguments like these, then breaks them down and provides methods for avoiding the pitfalls they point out. Critiques of EA should be specifically recognized and talked about as good: held up and examined, then passed out to our community so that we can grow and overcome the objections. I'm almost always lurking on the forum, and I don't often see posts discussing EA critiques. That should change.
Dawn Drescher (1y):
I basically agree, but in this proposal I was really referring to such things as "Professor X is using probabilistic programming to model regularities in human moral preferences. How can that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?" I also think the category you're referring to is very valuable, but those are "simply" contributions to priorities research as published by the Global Priorities Institute (e.g., working papers by Greaves and Tarsney come to mind). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles I would class that way. I think priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well known - hence my proposal is not about it.

Subsidise catastrophic risk-related markets on prediction markets

Prediction markets and catastrophic risk

Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.

*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so might show reluctance here too.

FTX, which itself runs prediction markets, might be particularly well suited for prediction-market interventions like this. I myself think they could do a lot to advance people's understanding of prediction markets if, in addition to their presidential prediction market, they also offered a conditional market on how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans winning vs. the Democrats winning. Conditional prediction markets on important indicators around big national elections would provide directly useful information while also educating people about prediction markets' potential.
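One concrete way to structure such a liquidity pledge - a sketch of one standard mechanism, not a claim about how FTX or any existing market actually operates - is a logarithmic market scoring rule (LMSR) market maker, whose liquidity parameter b caps the sponsor's worst-case loss at b·ln(N) for an N-outcome market:

```python
import math

def lmsr_cost(b: float, quantities: list[float]) -> float:
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(b: float, quantities: list[float], i: int) -> float:
    """Instantaneous price of outcome i (gradient of the cost function)."""
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

def max_subsidy(b: float, n_outcomes: int) -> float:
    """Worst-case loss for the subsidizing market maker: b * ln(N)."""
    return b * math.log(n_outcomes)

# A binary market (e.g. a biorisk question) subsidized with b = 10_000
# costs the sponsor at most b * ln(2), about $6,931, in the worst case.
print(round(max_subsidy(10_000, 2)))        # 6931
print(round(lmsr_price(10_000, [0, 0], 0), 2))  # 0.5
```

The appeal for a fund is that the subsidy is bounded up front: picking b is choosing exactly how much liquidity to donate to traders in exchange for information.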

Alex D (1y):
My company seeks to predict or rapidly recognize health security catastrophes, and also requires an influx of capital when such an event occurs (since we wind up with loads of new consulting opportunities to help respond). Is there currently any way for us to incentivize thick markets on topics that are correlated with our business? The idea of getting the information plus the hedge is super appealing!

Pandemic preparedness in LMICs

Biorisk

COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced they are. While there are certainly many problems with pandemic preparedness in high-income countries that need to be addressed, low- and middle-income countries (LMICs) face even more issues in detecting, identifying, containing, mitigating, and preventing currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMICs, opening up the risk of further, more virulent mutations.

We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local contexts. This project would be especially focused on novel pathogens more severe than currently known ones, and would help provide the resources and knowledge needed to upgrade those plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise working in LMICs. An emergency fund and expert advice can also be provisioned to be ready to go when pathogens are... (read more)

Arb (1y)

Language models for detecting bad scholarship 

Epistemic institutions

Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 such features we need to protect ourselves against.

This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo). 

Take some claim P which is below the threshold of obviousness that warrants a citation. 

It seems relatively easy, given current tech, to answer: (1) "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - but under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.

It is very hard to answer (2) "Is the cited ar... (read more)
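Task (1) maps naturally onto natural-language inference: treat passages from the cited work as premises and claim P as the hypothesis, and flag the citation if no passage entails it. A minimal pipeline sketch - the `nli_score` function below is a toy lexical-overlap stand-in, purely illustrative; a real system would call a trained entailment model there:

```python
def nli_score(premise: str, hypothesis: str) -> float:
    """Toy stand-in for an NLI entailment model: the fraction of
    hypothesis words that appear in the premise. A real system
    would use a trained entailment model here instead."""
    premise_words = set(premise.lower().split())
    hypothesis_words = hypothesis.lower().split()
    return sum(w in premise_words for w in hypothesis_words) / max(len(hypothesis_words), 1)

def check_citation(claim: str, cited_passages: list[str], threshold: float = 0.5):
    """Flag the citation if no passage in the cited work supports the claim."""
    best = max((nli_score(p, claim) for p in cited_passages), default=0.0)
    verdict = "supported" if best >= threshold else "flag for review"
    return verdict, best

claim = "smoking increases lung cancer risk"
passages = [
    "we observed that smoking increases the risk of lung cancer",
    "the sample consisted of 40 adults",
]
verdict, score = check_citation(claim, passages)
print(verdict)  # supported
```

Even this weak exact-support check, run at scale over reference lists, would surface the cited-article-says-nothing-of-the-sort cases; the hard semantic judgments stay with a proper entailment model and, ultimately, human reviewers.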

[anonymous] (1y)

Biorisk and information hazard workshops for iGEM competitors

Biorisk and Recovery from Catastrophe, Empowering Exceptional People

iGEM competitions are interdisciplinary synthetic biology competitions for students. They bring together the best and brightest university students with a considerable interest in synthetic biology. They already have knowledge and skills in bioengineering and many of them will likely choose it as a career path and will be very good at it. Educating them on biorisks and especially information hazards would therefore be a great contribution to safeguarding. They could also be introduced to EA ideas and rationalist approaches in general, bringing talented young people on board.

Tessa (1y):
You might be interested to know that iGEM (disclosure: my employer) just published a blog post about infohazards [https://blog.igem.org/blog/2022/5/4/can-too-much-knowledge-be-a-bad-thing]. We currently offer biorisk workshops for teams; this year we plan to offer a general workshop on risk awareness, a workshop specifically on dual-use, and potentially some others. We don't have anything on general EA / rationality, though we do share biosecurity job and training opportunities with our alumni network.
Leo Gao (1y)

Getting former hiring managers from quant firms to help with alignment hiring

Artificial Intelligence, Empowering Exceptional People

Despite having lots of funding, alignment has not been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the buy-in to alignment required before one can contribute is often cited as a reason, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end in itself. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.

Arb (1y)

On malevolence: How exactly does power corrupt?

Artificial Intelligence / Values and Reflective Processes

How does it happen, if it happens? Some plausible stories:

  • Backwards causation: people who are “corrupted” by power always had a lust for power but deluded others, and maybe even themselves, about their integrity.

  • Being a good ruler (of any sort) is hard and at times very unpleasant; even the nicest people will try to cover up their faults, covering up causes more problems, and at some point it is very hard to admit that you were an incompetent ruler all along.

  • Power changes your incentives so much that it corrupts all but the strongest. The difference from the previous story is that value drift is almost immediate upon gaining power.

  • A mix of the last two: you get more and more adverse incentives with every rise in power.

  • It might also be the case that most idealistic people come into power under very stressful circumstances, which force them to make decisions favouring consolidation of power (a kind of instrumental convergence).

  • See also this on the personalities of US presidents and their darknesses.
     
MaxRa (1y):
Yes, that's interesting and plausibly very useful to understand better. It might also affect some EAs at some point. The hedonic treadmill might be part of it: you get used to the personal perks quickly, so you still feel motivated and justified in putting ~90% of your energy into problems that affect you personally -> removing threats to your rule, marginal status improvements, getting along with people close to you. There's also some discussion of the backwards-causation idea in an oldie from Yudkowsky: Why Does Power Corrupt? [https://www.lesswrong.com/posts/v8rghtzWCziYuMdJ5/why-does-power-corrupt]
Tessa (1y)

Screen and record all DNA synthesis 
Biorisk and Recovery from Catastrophe

Screening all DNA synthesis orders for potentially serious hazards would reduce the risk that a dangerous biological agent is engineered and released. Robustly recording what DNA is synthesized (necessarily in an encrypted fashion) would allow labs to prove that they had not engineered an agent causing an outbreak. We are interested in funding work to solve technical, political and incentive problems related to securing DNA synthesis.

 

Meta note: there are already some cool EA-aligned projects related to this, such as SecureDNA from the MIT Media Lab and Common Mechanism to Prevent Illicit Gene Synthesis from NTI/IBBIS. Also, this one is not an original idea of mine to an even greater extent than the others I've posted.
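As a toy illustration of the technical core of screening (real systems such as SecureDNA are far more sophisticated, also handling reverse complements, translated protein frames, and near-matches; all sequences below are made up), an order can be checked for shared subsequences with a hazard list:

```python
def kmers(seq: str, k: int) -> set[str]:
    """All length-k substrings (k-mers) of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order: str, hazard_list: list[str], k: int = 10) -> bool:
    """Flag an order if it shares any k-mer with a known hazard sequence.
    Toy exact-match screen only; real screening also checks reverse
    complements, protein translations, and inexact matches."""
    hazard_index: set[str] = set()
    for hazard in hazard_list:
        hazard_index |= kmers(hazard, k)
    return not kmers(order, k).isdisjoint(hazard_index)

hazards = ["ATGGCGTTTAGCAAAGGCGA"]  # made-up stand-in for a hazard database
print(screen_order("TTTTGGCGTTTAGCAAACCCC", hazards))  # True: shares a 10-mer
print(screen_order("AAAAAAAAAAAAAAAAAAAA", hazards))   # False
```

The hard parts the proposal points at are everything around this loop: keeping the hazard list itself from being an infohazard, recording orders in an encrypted-but-provable way, and getting synthesis providers to adopt the screen at all.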

Group psychology in space

Space governance

When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they're likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and on research into social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they'll inevitably form independent group identities, but depending on their relationships with social groups back home, these identities could support links with Earth or create antagonistic relationships. Attitudes on Earth might likewise range from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their settler colonies off-world could help us develop equitable governance structures that promote peace and cooperation between groups.

Alex D (1y):
Would mostly apply to bunkers too!
zdgroff (1y)

Lobbying architects of the future

Values and Reflective Processes, Effective Altruism

Advocacy often focuses on changing politics, but the most important decisions about the future of civilization may be made in domains that receive relatively less attention. Examples include the reward functions of generally intelligent algorithms that eventually get scaled up, the design of the first space colonies, and the structure of virtual reality. We would like to see one or more organizations focused on getting the right values considered by influential decision-makers at institutions like NASA and Google. We would be excited about targeted outreach to promote consideration of aligned artificial intelligence, existential risks, the interests of future generations, and nonhuman (both animal and digital) minds. The nature of this work could take various forms, but some potential strategies are prestigious conferences in important industries, retreats including a small number of highly-influential professionals, or shareholder activism.

Bounty Budgets

Like Regranting, but for Bounties

Problem: In the same way that regranting decentralizes grantmaking, the same could be done for bounties. For example, give each of the top 20 AI safety researchers up to $100,000 to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.

In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.

Bounties are great because you only pay out on success. If, hypothetically, each researcher created 5 bounties at $10,000 each, that'd be 100 bounties - lots of experiments.

RFPs are great because they put less risk on the applicants while also being a scalable, low-management way to turn money into impact.

Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo

EA ops: "Immigration Tech" 

I have an idea for a cloud-based, AI-powered SaaS platform to help governments handle immigration. Think KYC meets immigration.

Today the immigration process is disjointed and fragmented across different countries, and in most cases it's cumbersome and overly bureaucratic. That means difficulties for immigrants, particularly in clear human-rights cases, as well as for countries, which may be losing out on highly skilled migrants.

The idea is a platform that connects potential immigrants with potential host countries. Instead of applying individually to a number of countries, an immigrant would upload their relevant documentation to the platform, which would then share it with their countries of choice. Another model could be for interested countries to reach out directly to the potential immigrant of their own accord.

Part of the platform's work would be to perform the relevant KYC checks to authenticate a request as legitimate - saving time and resources for national immigration departments, particularly when a request is lodged with multiple countries.

Obviously the idea is still in its early stages and there are a number of detail... (read more)

Avi Lewis (1y):
Basically, the aim here is twofold:
1. Skilled migrants. Enable host countries to perform a reverse lookup to attract skilled migrants with a background in, say, tech, STEM, or IT - and vice versa, support skilled migrants in their search for a new home environment that can foster their growth and development. An influx of academic and entrepreneurial immigrants can boost the economies of their newly adoptive countries and can lead to an increase in scientific advancement.
2. Human-rights cases. All too often these fall through the cracks, with long wait times, particularly in danger zones. A principal aim of this platform would be to help find a new home country for those who need it most.

More public EA charity evaluators

Effective Altruism

There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge, and Animal Charity Evaluators. We would like to see more professional funding-opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren't yet covered, such as AI and EA meta.