
My team (Rethink Priorities’ General Longtermism Team) is aiming to incubate 2-3 longtermist projects in 2023. I’m currently collecting a longlist of project ideas, which we’ll then research and evaluate, with the aim of kicking off the strongest projects (either via an internal pilot or collaboration with an external founder). 

I’m interested in ideas for entrepreneurial or infrastructure projects (i.e., not research projects, though a project could be something like “create a new research institute focused on X”). 

Some examples to give a sense of the type of ideas we’re interested in (without necessarily claiming that these specific ideas are particularly strong): An organization that lobbies for governments to install far UVC lights in government buildings; a third-party whistleblowing entity taking reports from leading AI labs; or a remote research institute for independent researchers. You can see a list of our existing ideas here.

I’ll begin reviewing the ideas on April 17, so ideas posted before then would be most helpful.

Answers

I see that the compilation and distribution of "civilisational reboot manuals" is already on the list. I love the concept, but think the scope should be significantly expanded to include stress testing and refinement of the drafted content. This would verify whether the most important facets of knowledge and technology are covered, and whether the detail and style are such that the manuals can actually be followed. I heard this suggested by Lewis Dartnell (author of "The Knowledge") on the 80,000 Hours podcast, and think it would be great to really run with it. A fun, high-profile, and potentially profitable way to do this would be a televised competition format, in which teams of "survivors" have to try to rebuild as much of the tech tree as possible (or reach a set technological achievement), with a "civilisational reboot manual" as their guide.

The mechanics of such a competition would need thoughtful planning to strike a workable balance between being sufficiently representative of civilisational collapse scenarios (number of people, resources on hand, etc.), having an acceleration mechanism to model decades of rebuilding within the length of a season, and being watchable. Challenging, but I don't think it would be a show-stopper (terrible pun, sorry).

Benefits of this could include: 

  • Raising awareness of various existential risks. Perhaps each season/team could model a different collapse scenario, such as nuclear winter, engineered pandemics, or AI misalignment, with the opening sequences explaining the likelihood of these events occurring and what action is needed to prevent them. I acknowledge that broadcasting to a potentially global audience could be a reputational risk to EA and would have to be managed carefully.
  • Stress testing many of the assumptions we have around collapse and rebuilding scenarios.
  • An opportunity to get funding and visibility for larger-scale testing of proposed technologies and solutions, e.g. for a "nuclear winter" scenario, trialing some of ALLFED's research on simple greenhouse construction, seaweed/mushroom farming, and the like.
  • Learn where the gaps in proposed civilisational reboot manuals are, as there are likely some we cannot anticipate until they are tested realistically. In my head I see someone trying to recreate a particular machine or chemical process, but one small component isn't described in sufficient detail and everything grinds to a halt.
  • Study what skill sets are needed amongst "survivors" and what governance structures work well, to ensure both progress and relative harmony. Some of these may be counter-intuitive.
  • Test whether the provision of select tools/technologies dramatically accelerates the tech-tree rebuild. For example, a good blacksmith's anvil is very difficult to make from scratch, but once you have one it lasts nearly forever and facilitates the creation of innumerable useful items. Such items could then be included in "reboot kits" along with the manuals themselves.

An incubator team could refine the concept and goals, perhaps do some limited trials, and then pitch to various networks or streaming services. 

 

A physical engineering lab to build capacity for prototyping hardware ideas relevant to areas identified as important to the long-term future.

https://forum.effectivealtruism.org/posts/9BDzFqAXu7sqPvRn5/reslab-request-for-information-ea-hardware-projects-1

An organization that identifies cities/regions across the world that are in danger of constructing a new power plant in the next 5-10 years and lobbies for the construction of a virtual power plant instead. 

On wild animal welfare: I was alarmed when I first saw the extent of barbed-wire fencing in Central America (and probably across the Americas in general). Several field studies, mainly from Australia, have tried to evaluate how harmful barbed-wire fences are to wild animal health, and the results were fairly robust. The fences are especially bad for large mammals such as kangaroos and dogs, and for winged mammals such as bats and flying foxes. Personally, I've seen a couple of dogs in Costa Rica with damaged eyes and cuts on their bodies; barbed wire could have been the cause.

Because barbed-wire fencing is used so extensively in Central America, and because the preference for it is apparently deeply rooted in local culture (in rural areas and even small cities, the majority of people use barbed wire as the standard way to demarcate their property), eradicating this custom and driving its replacement with fencing that is more compassionate towards wildlife would be a long-term project. It almost certainly won't happen in a few years. Ultimately, I would love to see the policy on barbed-wire fencing change worldwide, not just in the Americas.

Comments

Wonderful work! I’ve commented directly on the spreadsheet, but for the benefit of anyone who won’t check it:

Several of these ideas can be rolled into one:

  1. A remote research institute for independent researchers
  2. Infrastructure to support independent researchers
  3. Building vibrant EA academic communities in Africa, Asia, and Latin America
  4. AI alignment prizes, advance market commitments, and other forms of proto–impact markets

The scheme that I imagine could have all of these benefits:

  1. Find and recruit more independent researchers for high-impact research
  2. Tap into talent pools in countries that don’t have a lot of EA presence
  3. Circumvent restrictions on foreign donations/grants to researchers in some countries
  4. Support independent researchers monetarily
  5. Kickstart academic careers
  6. Support many small research projects efficiently (low due diligence overhead)
  7. Recruit for-profit investors such as business angels and impact investors to derisk research for researchers
  8. Derisk research for potential risk-averse non-EA funders
  9. Help researchers network, find potential advisors or collaborators
  10. Provide researchers with infrastructure (servers, labs, etc.) efficiently
  11. Monitor and improve the counterfactual impact of prize contests
  12. Tap into corporate funding for prize contests (in high-growth industries) 

The best thing is that to my knowledge it should be fully legal to do this.

We (GoodX) are working on infrastructure to support independent researchers with funding and simplify grant applications. We’re not currently implementing this particular scheme, but that could change given the right team (experts in US securities law and startup fundraising).

The approach:

  1. Build a network:
    1. Set up a nonprofit think tank with some form of limited liability and a suitable purpose so that it is exempt from the requirement to register any publicly traded securities with the SEC.
    2. Network among business angels, HNWIs (including non-altruists), and possibly VCs. Even $100k goes a long way in low-income countries, so "high net worth" can be a low bar depending on the country.
    3. Watch out for prize contests, AMCs, governmental and private outcome payers, etc.
  2. Let the investors scout out great researchers:
    1. An investor could be an Indian economics professor – smaller high-context investors can lead; larger low-context investors can follow.
    2. Possibly help with the match-making, especially once we have a mature network ourselves.
  3. Match-make between investor + researcher and prize contest/AMC/outcome payer:
    1. Make contracts with large funders such as Open Phil, Gates Foundation, USAID, etc. over outcome purchases, AMCs, etc. that match the areas of expertise of the researchers.
    2. This could even work for risk-averse funders who could not otherwise support scientific research.
    3. Or enter the research into existing prize contests.
  4. Take the investment from the investor, pay the researcher a monthly contractor salary, hold some money back to cover costs.
  5. If the researcher is successful and the outcome payment is disbursed, it goes to the investors.
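
To make the cash flows in steps 4 and 5 concrete, here is a minimal, purely illustrative sketch in Python. All figures (investment size, salary, overhead rate, outcome payment) are hypothetical assumptions for illustration, not numbers from the post or from GoodX's plans.

```python
# Toy model of the investor-backed outcome-payment scheme sketched above.
# All figures are hypothetical and chosen only to illustrate the mechanics.

def outcome_contract(investment, monthly_salary, months, overhead_rate,
                     outcome_payment, success):
    """Summarise where the money goes for a single funded researcher."""
    overhead = investment * overhead_rate  # held back by the think tank to cover costs
    salaries = monthly_salary * months     # paid to the researcher as a contractor
    assert salaries + overhead <= investment, "investment must cover salaries and overhead"
    investor_return = outcome_payment if success else 0.0
    return {
        "researcher_receives": salaries,
        "think_tank_overhead": overhead,
        "investor_receives": investor_return,
        "investor_profit": investor_return - investment,
    }

# Hypothetical example: a $60k investment funds a 12-month project,
# with a funder having committed a $100k outcome payment.
print(outcome_contract(investment=60_000, monthly_salary=4_000, months=12,
                       overhead_rate=0.2, outcome_payment=100_000, success=True))
# -> the researcher receives $48k in salary, the think tank keeps $12k,
#    and the investor receives the $100k outcome payment ($40k profit).
```

If the research fails, the investor bears the loss, which is the sense in which the scheme derisks research both for the researcher (who is paid a salary regardless) and for risk-averse outcome payers (who only pay for verified results).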

Meanwhile, nothing keeps the think tank from also seeking grant funding and using its network to pay contract researchers from that. In particular, researchers who have proven themselves in a prize contest but can't currently find a suitable new outcome payer could be kept under contract using grant money.

We’re always happy to have calls on such topics!

I appreciate this initiative, @Buhl. I went to the Google form and noticed it requires permissions to update. There are a lot of entries on it, and it looks like the last update was January 2022, pre-FTX crash. Not sure if others felt this way, but personally this made me question whether reading through the existing ideas / coming up with new ones would be a genuinely good use of time, or merely parasocial.

Is your goal to find out which ideas have the greatest support on the forum, to generate more ideas, to find people interested in working on particular ideas, or something different? 

Are you hoping people will compile and upvote/downvote ideas here in the comments?
