
Ben Todd (CEO of 80,000 Hours) says "Effective altruism needs more 'megaprojects'. Most projects in the community are designed to use up to ~$10m per year effectively, but the increasing funding overhang means we need more projects that could deploy ~$100m per year."

What are some $100m projects that you think might be worth consideration?

By megaproject, I'm referring to any project that could eventually be scaled up to $100 million, not ones that are planned from the start to cost $100 million. In many cases, this could include very small efforts that would have to achieve multiple levels of success to eventually reach $100 million+ per year.


Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.

"Since 2015 alone, MacArthur directed 231 grants totaling >$100m, in some cases providing more than half the annual funding for individual institutions or programs."
"MacArthur was providing something like 40 to 55 percent of all the non-government funding worldwide on nuclear policy"

Out of all the ideas, this seems the most shovel-ready. 

MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.

All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly. 

Possibly not all of that money would meet EA standards of cost-effectiveness though - indeed MacArthur's withdrawal provides some evidence that it isn't cost effective (if we trust their judgement).

Here's the interesting, frustrating evaluation report:  https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf[16].pdf
Looks to me like a classic hits-based giving bet - you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama's JCPOA/New START) get a home run.

To clarify, this is $100m over around 5 years, or $20m/year - which is a good start, but far less than $100m/year.

I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.

I misread this as "nuclear power", not "nuclear arms control"  😂

This is a weird one that is illustrative:

Taking over the US private prison system (described here).


  • Benefits from returns to scale, maybe only available as a "mega project"
  • Could literally make a profit (cost-effectiveness is infinite; pretty much the only way to beat GiveWell?)
  • It gives access to institutions, even political capital for reform aligned to social change cause areas
  • Almost no one else would do this
  • Probably a lot of bad things going on inside them that EAs could improve

There's a ton of drawbacks. These include barriers to entry like regulations and capture which could make this impractical. Once inside, implementation issues such as cultural/institutional challenges will be far outside the typical circle of competence of EA. 

But I think that's the point—this idea has a flavor orthogonal to "New R&D/policy institute for X".

Certainly innovative, although I wonder about the PR consequences.

I'm late to this, but I wonder if Charles' analysis ought to extend beyond private prisons to address all the ways in which prisons and jails have come to privatize essential services. This includes telephone calls and digital communication, which are largely controlled by a legal monopoly, along with medical treatment and food preparation.

The hyperlinked stories and legal cases are but a few examples of the potentially life-altering negative outcomes that have come out of privatization. One of the major challenges with combating this trend is that do... (read more)

Charles He
This is a great and deep comment. I think it’s extremely generous to call my little blurb above an “analysis”. I am not informed and I am not involved in this area of prison or justice reform.  I’m writing this because I don’t want anyone to “wait” on me or anyone else. If you are reading this and want to dedicate some time on this cause or intervention, you should absolutely do so! Again, thanks for this comment.

I love this. Could be big or small nearly anywhere in the world. Some precedent too: Prison reform charity Nacro joins bid to run jails | Prisons and probation | The Guardian

I think why I like this so much is that it isn't another idea that is fiddling on the margins of a problem with a complicated theory of impact - it just provides a project vehicle to solve one of the more tractable key problems head on.

Love this! We could also use prisons as a place where social scientists could study how to optimize ethical development amongst criminals. These samples are so hard to access, but could produce so much impactful insight on when and why ethical decision-making fails, and how to improve ethical decision-making under conflict. This could also be coupled with a grant competition that would fund the best ideas on how to rehabilitate inmates and improve their ethical decision-making both while in prison and after being reintegrated back into society.

Build up an institution that runs the equivalent of the IGM economic experts survey for every scientific field, with paid editors, probabilistic forecasts, and perhaps monetary incentives for the experts. https://www.igmchicago.org/igm-economic-experts-panel/


I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up for $55m/5 years ($11m/year).

I think you’re right. Even if the experts were paid really well for their participation, say 10k per year (maybe as a fixed sum or in expectation given some incentive scheme), and you might have on the order of 50 experts each for 20(?) fields, then you end up with 10 million per year. But probably it wouldn’t even require that, as long as it’s prestigious and is set up well with enough buy-in. Paying for their judgement would make the latter easier I suppose.
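The back-of-the-envelope estimate above can be written out as a quick sketch (all figures are the commenter's assumptions, not real budgets):

```python
# Rough annual cost of paying experts for the cross-field survey institution.
# All numbers are the illustrative assumptions from the comment above.
experts_per_field = 50
num_fields = 20
pay_per_expert = 10_000  # USD/year, as a fixed sum or in expectation via incentives

expert_cost = experts_per_field * num_fields * pay_per_expert
print(f"Expert payments: ${expert_cost:,}/year")  # Expert payments: $10,000,000/year
```

Even with generous pay, the expert payments alone land an order of magnitude below the $100m/year mark.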

I would upvote if someone wrote a quick summary of this and a number of the other ideas which aren't immediately clear on first reading.

I think the gist of this idea might be something like a massively-scaled up prediction platform that focuses on recruiting subject-matter experts and pays them to make predictions on questions relevant to their expertise while perhaps additionally discussing important/neglected trends in their fields. 

The Center for Election Science could easily make efficient use of greater than $50M a year with infrastructure and ballot initiatives. We've already laid out a plan on how we would spend it. We could also potentially build towards some hyper-aggressive $100M years by including lobbying in the remaining states that don't allow ballot initiatives. In any case, we are woefully underfunded relative to our goals and could at the very least surpass the $50M threshold in a couple of years with sufficient funding. If even greater funding were available, we could build in lobbying following more state-level wins.

For clarity, our lack of funding has already cost us approval voting campaign opportunities and is a big issue for us.

Okay, but I'm not persuaded that the Center for Election Science is scientific. I think it should be called "The Center for Approval Voting (especially the single-winner district kind)™"

I studied electoral systems for a school project and reached very different conclusions, for instance: that all single-winner-district systems are inherently  non-proportional and subject to gerrymandering. I went so far as to design my own system (I suppose its merits are debatable — but never debated). In emails from the CES I see none of the insights I gained in my ... (read more)

Finally get acceptable information security by throwing money at the problem.

Spend $100M/year to hire, say, 10 world-class security experts and get them everything they need to build the right infrastructure for us, and for e.g. Anthropic.

Strong second - we should build up secure open computing from bare metal (secure, open verifiable CPUS, memory, etc.) to the OS, to compilers, to a secure applications layer.

Is this something we could purchase for a few hundred million in a few years?
I discussed this with a couple people ca. 2 years ago, and thought it was likely that a company like Google could design and produce a full stack secure system as a moderately large internal project. And some groups are already doing parts of this - for example, a provably secure OS microkernel, for far less than what we'd be able to spend. As a fermi estimate on the high end, if we hire 10 top hardware design people for $500k/year each, throw in the same number of OS design people, and compiler designers at the same cost, and a team of 50 great people to do the rest of the development and testing at $300k/year, $100m means that we have 3 years to do this - and it's an open source project, so we'd get universities, etc. working on this as well.  (i.e. we could not mass produce the hardware at theses prices, but that's commercialization, not design, and it should be funded by sales.)

(not an expert) My impression is that a perfectly secure OS doesn't buy you much if you use insecure applications on an insecure network etc.

Also, if you think about classified work, the productivity tradeoff is massive: you can't use your personal computer while working on the project, you can't use any of your favorite software while working on the project, you can't use an internet-connected computer while working on the project, you can't have your cell phone in your pocket while talking about the project, you can't talk to people about the project over normal phone lines and emails... And then of course viruses get into air-gapped classified networks within hours anyway. :-P

Not that we can't or shouldn't buy better security, I'm just slightly skeptical of specifically focusing on building a new low-level foundation rather than doing all the normal stuff really well, like network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. etc. Well, I guess you'll say, "we should do both". Sure. I guess I just assume that the other things would rapidly become the weakest link.

In terms of low-level security, my old company has a big line of business designing chips themselves to be more secure; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI, that's just one thing I happen to be familiar with. Actually I guess it's not that relevant.

Agreed that secure low level without application security doesn't get you there, which is why I said we need a full stack - and even if it wasn't part of this, redeveloping network infrastructure to be done well and securely seems like a very useful investment. But doing all the normal stuff well on top of systems that still have insecure chips, BIOS, and kernel just means that the exploits move to lower levels - even if there are fewer, the difference between 90% secure and 100% secure is far more important than moving from 50% to 90%. So we need the full stack.

I see enormous value in it and think it should be considered seriously.

On the other hand, the huge amount of value in it is also a reason I'm skeptical about it being obviously achievable: there are already individual giant firms that would internally save many millions annually (not to mention the many billions the first firm marketing something like this would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-sys... (read more)

I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.
Agree with the "easily tens of millions a year", which, however, also underlines part of what I meant: it is really tricky to know how much we can expect from what exact effort. I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first ones to think of this. In addition to all the other uncertainties, we should keep in mind that no one seems to have made much progress in this domain, despite the possibly enormous value even private firms could have captured from serious progress.

Epistemic status: Confused person with zero expertise in this area

Who is "us" in this scenario? I assume it's meant to be "organizations with access to infohazardous bio/AI data"?

If so, what makes you think of the current infosec of these orgs as "unacceptable"? If you think they'd disagree with this characterization, do you have a sense for why?

If not, what do you see as some plausible consequences of weak infosec that could plausibly total $100m in damages for EA orgs if they came to pass, given that EA is a network of lots of organizations, with pretty ... (read more)

This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah our security is unacceptably weak" and "I don't think we are in danger yet, we probably aren't on anyone's radar" and "Yeah we are taking it very seriously, we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person so maybe my experience is misleading. also (b) on priors, it seems that people in general don't take security seriously until there's actually a breach.  (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full time on this.


I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seem to strongly agree with the following claim:
"No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."

If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.

Makes sense. Just to clarify — the phrasing here makes me think these are organizations with potentially dangerous technical knowledge, rather than e.g. CEA. Is that right?


https://evervault.com/ are launching in October and generally working on problems in this space

Here's a few suggestions for near-term megaprojects: 

- Longevity research
- Meat-replacement mega-cost reduction investments (leapfrogging current tech) 
- Eliminating disease-bearing mosquitoes 
- Eliminating all vaccine-preventable diseases worldwide 
- Developing cheap, universal metagenomic scanning for biosecurity (Also see this slightly less ambitious version, mentioned by Alex in a different answer.)
- Large-scale governance reform initiatives 
- Universally available, validated, well-built apps for CBT to reduce depression / increase happiness 
- AI safety (We're doing this one already, so the key players may not have room for funding.)

I suggest you split this into different comments so each can be upvoted separately.

For AI safety - maybe Redwood has the most room for funding? They seem to be the most interested in growth (correct me if I'm wrong). And even if the existing players don't have more room, other ways need to be thought of to scale up further through funding as the field is clearly still too small to compete in the race against the titanic field of AI capabilities.

Agree longevity needs to be funded more as well, though lots of aging billionaires like Bezos seem to be throwing tons of money at it these days too so maybe EA money would be much less useful/uniquely needed there than e.g. AI alignment.

Impact certificates. Announce that we will purchase NFTs representing altruistic acts, created by one of the actors. (Starting now, but with a one-year delay, such that we can't purchase an NFT unless it's at least a year old.) Commit to buy $100M/year of these NFTs, occasionally reselling them and using the proceeds to buy even more. Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have. 

Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

It may be critical that the purchasing decisions will somehow account for historical risks (even ones that did not materialize and are no longer relevant), otherwise this approach may fund/incentivize net-negative interventions that are extremely risky (and have some chance of being very beneficial). I elaborated some more on this here.

I don't understand. Can you explain more what this project would do and how it would create change?

Also, this project seems to involve commitment of i) hundreds of millions of dollars of funding and ii) reliable guarantees that these will be used cost effectively.

These (extraordinarily) strong promises are structurally necessary and also seem only achievable by "centralization".

Given this centralization, what is the function or purpose of the NFT?

(Note that my question isn't about technical knowledge about "blockchain" or "NFTs" and you can assume gears knowledge of them and their instantiations up through 2020.)

Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you  in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.) I don't think it should replace our regular grants programs. But it might be a nice complement to them. I don't see what you mean by centralization here, or how it's a problem. As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.
Charles He
  But the reason why you would evaluate someone's pitch, as opposed to using hindsight, is that nothing would be done without funding?
  
  I think I am using centralization in the same way that cryptocurrency designers/architects talk about how cryptocurrency systems actually work ("centralization pressures"). The point of NFTs, as opposed to you, me, or a giant granter producing certificates, is that they are part of a decentralized system, not under any one entity's control. My understanding is that this is the only logical reason why NFTs have any value and are not a gimmick. They don't have any magical power by themselves, or any special function or information, or anything like that.
  
  Under this premise, decentralization is undermined if any other structural component of the system is missing. For example, if the grantors or their decisions come from a central source, then the value of having a decentralized certificate is unclear. Note that undermining "centralization" is sort of like having a wrong step in a math theorem: it's existentially bad, as opposed to a reduction in quality or something.
  
  I meant that you have written out two distinct promises here that seem to be necessary for this system to structurally work in this proposal. One of these promises seems to be high-quality evaluation:

Once it's established that you will be giving $100M a year to buy impact certificates, that will motivate lots of people already doing good to mint impact certificates, and probably also motivate lots of people to do good (so that they can mint the certificate and later get money for it)

By buying the certificate rather than paying the person who did the good, you enable flexibility -- the person who did the good can sell the certificate to speculators and get money immediately rather than waiting for your judgment. Then the speculators can sell it back and forth to each other as new evidence comes in about the impact of the original act, and the conversations the speculators have about your predicted evaluation can then help you actually make the evaluation, thanks to e.g. facts and evidence the speculators uncover. So it saves you effort as well.

Charles He
Ok, I see what you're saying now. I might see this as a creating bounty program for altruistic successes, while at the same time creating a "thick" market for bounties that is crowd sourced, hopefully with virtuous effects.
That's a succinct way of putting it, nice!

Hire ~5 film-studios to each make a movie that concretely shows an AI risk scenario which at least roughly survives the rationalist fiction sniff test. Goal: Improve AI Safety discourse, motivate more smart people to work on this.

Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/

(Sentinel is a system for testing new diseases such that unknown pathogens could be recognised from the first sample. Listen to the podcast alexrjl has linked) 

What about creating academic institutes in reputable universities to tackle important problems, eg similar to FHI or CSER, creating research prizes, and sponsoring conferences. I'm mostly thinking about AI Safety, but it may be useful in other areas too.

Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
On bio preparedness there's quite a lot, e.g. Cassidy Nelson recommendations, Andy Weber recommendations

Something that could increase economic growth, dramatically reduce inequality of opportunity, and improve well-being of people worldwide:

Try to get as many people connected to the internet with a personal device as possible. 

The stat that ~50% of the world is connected to the internet is misleading. To count as connected, you must have used a networked device once in three months, which is far from what most people would expect. 

Source: International Telecommunication Union ( ITU ) World Telecommunication/ICT Indicators Database

The importance of internet connectivity is hard to overstate. It's necessary to function as a 21st-century citizen and is the backbone of our societies. It's also necessary for securing various human rights. 

Some quick reasons why internet access is important: 

  • Grants access to free education on just about anything 
  • Provides access to banking, communication technologies, etc.
  • Increases economic growth, of which well-being is partly a function; internet access effectively increases the computational power of the economic system and can 'improve' the substrate on which it runs (people)
  • Increases awareness of EA in general  

Wrote this quickly so apologies for the brevity. I've been working on a longer post where I dive into this in a lot more detail. 

My very uninformed sense is that Starlink might make internet access a lot easier. Metaculus question writing opportunity.

Some people have been saying that the Starlink system's limit is 0.5M consumers even after they release a whole lot more satellites: https://www.techdirt.com/articles/20200928/09175145397/report-notes-musks-starlink-wont-have-capacity-to-truly-disrupt-us-telecom.shtml. This would mean you can't expect it to turn even 0.1% of unconnected people into netizens.
Yep, it's incredibly exciting. I see a few issues with it in this context, though. In the short run, it will be prohibitively expensive for most of the world's population, and it doesn't solve the device-ownership necessity. I also don't like the idea of internet access being in the control of a company subject to national laws. I feel that we need a censorship-resistant internet, especially in the existing climate. We're increasingly seeing crackdowns across the world, and I don't think the US will be immune from increased internet suppression. 

I think this would be broadly useful and in particular increase the reach of mobile payment-based activities like GiveDirectly. I'd be curious about estimates of how cost-effective increasing internet penetration would be, compared to throwing more money at GD.

Mozilla have a fellowship aimed at this: https://foundation.mozilla.org/en/what-we-fund/fellowships/fellows-for-open-internet-engineering/

Developing new climate models has costs in the hundreds of millions of dollars. Useful longtermist climate modelling could include:


I don't see climate research as very valuable. The value of information would only be high if this research would change how people act. Climate inaction seems to be mainly political inertia, not lack of information about potential catastrophe. 

Do you mean just the fourth bullet, or do you think this about all four?

The 1980s nuclear winter and asteroid papers (I'm thinking especially Sagan et al. and Alvarez et al.) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged as much on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating.

There's also VOI in resolving how big a concern nuclear winter is (e.g. some recent papers are skeptical): if it turned out to not be as existential as we thought, that would change cause prioritisation for GCRs. On geoengineering (sorry, 'climate interventions'(!)), note that 'getting more climate modelling' is a key aim for e.g. Silver Lining.

On the fourth one, on the margin, I think more research - especially if it were the basis for an IPCC special report - would be influential. There's also VOI for our cause prioritisation. It just is really remarkable how understudied it is!
https://www.pnas.org/content/114/39/10315
https://forum.effectivealtruism.org/posts/HaXxEtx4QdykBjJi7/betting-on-the-best-case-higher-end-warming-is
I was just referring to the last bullet, re climate change. E.g. after the last IPCC report, it would have been reasonable for govts to believe that there was a >10% chance of >6C of warming, and that has been true since the 1970s, without having any impact. The political response to climate change seems to be influenced by mainstream media coverage, and public opinion in some circles would be fair to characterise as 'very concerned' about climate change. An opinion poll suggests that 54% of British people think that climate change threatens human extinction (depending on question framing). I agree that in a rational world we would want to know how bad climate change could be, but the world isn't rational.

If you're just talking about EA cause prioritisation, the cost-benefit ratio looks pretty poor to me. Wrt reducing uncertainty about climate sensitivity, you're talking costs of $100m per year for a slim chance of pushing climate change above AI, bio, and great power war for major EA funders. Or we might find out that climate change is less pressing than we thought, in which case this wouldn't make any difference to the current priorities of EA funders.

I also don't see how research on solar geoengineering could be a top pick: stratospheric aerosol injection just doesn't seem like it will get used for decades, because it requires unrealistic levels of international coordination. Also, I don't think extra modelling studies on solar geo would shed much light unless we spent hundreds of millions. Climate models are very inaccurate and wouldn't provide much insight into the impacts of solar geo in the real world. There might be a case for regional solar geo research, though.

(Fwiw, I really don't rate that Xu and Ramanathan paper. They're not using existential in the sense we are concerned about. They define it as "posing an existential threat to the majority of the population". The evidence they use to support their conclusions is very weak. For example, they not
Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. It seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.

I was thinking that $100m would be for all four of these topics, and that we'd get cause-prioritisation VOI across all four of these areas. $100m for impact and VOI across all four seems pretty good to me (however, I'm a researcher, not a funder!)

On solar geo, I'm not an expert on it and am not arguing for it myself, merely reporting that it's top of the 'asks' list for orgs like Silver Lining. I actually rather like the framing in Xu & Ramanathan: I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited it more to demonstrate the lack of research that's been done on these scenarios.
On the last point, during the early Pliocene, early hominids  with much worse technology than us lived in a world in which temperatures were 4.5C warmer than pre-industrial. It would be a surprise to me if this level of warming would kill off everyone, including people in temperate regions.  There's more to come from me on this topic, but I will leave it at that for now

I definitely want to see more modeling of supervolcano and comet disasters.

Take some EAs involved in public outreach, plus some journalists who made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity- and altruism-waterline.

We could buy (a significant number of shares in) media companies themselves and shift their direction. Bezos bought the Washington Post for $250 million. Some are probably too big, like the New York Times at a $8 billion market cap and Fox Corporation at $20 billion.

I generally agree, although I think these >$1B general-audience entities are too expensive for EAs. Whereas I think it would make sense to buy media companies and consultancies that are somewhat focused on global security, AI and/or econ research, e.g. Foreign Policy magazine, Wired, GZero Media, Stratfor, the Economist Intelligence Unit, and so on. At least, I think the value of information from trying out buying one or more smaller entities, to see how one could steer them or bolster them with some EA talent, could be high - the most similar things I can think of EAs having done previously were investing in DeepMind and OpenAI.

Another way of thinking about this question is - are there other entities that are of less value to invest in than DM/OAI, but more than the media/consulting orgs that I mentioned?

Who should buy them? I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)
It could only be billionaires who are running out of donation targets. If Bezos can buy WaPo, then less prominent billionaires can buy less popular media with much less (though not zero) controversy. But I agree that it only works well if you have EA-leaning talent to work there, especially at the executive level.

Matt makes lots of money on his independent Substack now, so that feels less urgent, but funding other things like Future Perfect in other news sources, as the Rockefeller Foundation does now, seems great.

Urgent doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his Substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative and world-improvy journalism?
I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.
Thanks, I didn't see what he said about this. I just read an Atlantic article about it, and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, or why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism: https://www.theatlantic.com/ideas/archive/2020/11/substack-and-medias-groupthink-problem/617102/ Also, the idea is of course not at all dependent on him; I suppose there would be other great candidates. Yglesias just came to mind because I really like his work.
Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.
Interesting, the Atlantic article didn't give this impression. I'd also be pretty surprised if you had to become essentially the cliche of a moderate politician if you're part of the leadership team of a journalistic organization. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.:

  • epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
  • values like exploring neglected and important topics with a focus on having an altruistic impact

And then maybe being involved in hiring the people who have shown promise and fit?
Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causes are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.
Nathan Young
Or indeed experimenting with different incentives in news production. What would EAs do if they all had £10 to spend on news production?

Wouldn't they lose readers if they left their organizations? Is that what you mean? The fact that Future Perfect is at Vox gets Vox readers to read it.

In the short term yes, but my vision was to see a news media organization under the leadership of a person like Kelsey Piper that is able to hire talented reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. Not sure how scalable Future Perfect is under the Vox umbrella, and how freely it could scale up to its best possible form from an EA perspective.

I have claimed that the first few hundred million dollars of preparation for agricultural and electricity-disrupting GCRs is competitive with AGI safety for the long term, and that preparation for agricultural GCRs is more cost-effective than GiveWell interventions. Since these catastrophes could happen right away, I think it does make sense to scale up quickly to $100 million per year to get the preparation fast. Beyond research, this money could be used for piloting new technologies and developing response plans and training. Maintaining $100 million per year may then be less cost-effective than AGI safety at the expected margin, but would still provide additional value and may be competitive with other priorities. Projects could include subsidizing resilient food sources such as seaweed, cellulosic sugar, methane single cell protein, etc., or building factories flexibly so that they could switch quickly from producing animal feed or energy to human food. These could easily be many billions of dollars per year.

No idea what it would cost, but we should get to work on cloning John von Neumann: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/

Interesting! Do you know anything about the state of regulations around this? 

(sorta related, there are several pet cloning services)

I'm not sure what the potential downsides of such a widespread tech are, but it seems like something which could have high scalability if done as a for-profit company.

Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel, I assume the EU's rules would be similar.)

The Sustainable Development Goals - and their predecessor, the MDGs - are like a megaproject led by the UN. Some of these are already aligned with EA priorities, such as the following:

  • Eradicating extreme poverty (Goal 1, Target 1.1)
  • Ending hunger (Goal 2, Target 2.1) and malnutrition (Target 2.2)
    • Fortify Health aims to improve health by providing fortified wheat flour
  • Good health and well-being (Goal 3)
  • Clean water and sanitation (Goal 6)
  • Ending energy poverty (Goal 7, Target 7.1)
  • Increasing the share of renewable energy (Target 7.2) and energy efficiency (Target 7.3)
  • Promoting clean energy innovation (Target 7.A)
  • Decent work and economic growth (Goal 8)

The Economist has written that Goal 1 (ending poverty) should be "at the head of a very short list." In my opinion, if we're going to do a megaproject, we should take a handful of the SDG targets (such as 1.1, ending extreme poverty) and spend billions of dollars aggressively optimizing them.

An institute for the science of suffering.

Do you know about QRI? They're pretty close to what you're describing. https://www.qualiaresearchinstitute.org/

Yes I know, thank you ADS, but I rather have in mind something like "Toward an Institute for the Science of Suffering" https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#

You can maybe make very good civilizational refuges for 100M/year, though this is probably considerably more capital than MVPs I'd like to consider.

I did some more thinking (still not full Fermis) and now think that this is a >1B project even for just a sufficiently good MVP, possibly considerably more. 

Though most of the cost is upfront, like digging and constructing full bunkers with individual nuclear power plants. The running cost should be considerably lower than $100M/year, unless I'm missing something important.

Is there a good writeup anywhere on cost estimates for this kind of refuge? Or what it would require?

Not that I know of. Nick Beckstead wrote a moderately negative review of civilizational refuges 7 years ago (note that this was back when longtermist EA had a lot less money than we currently do).

One reason I'd like to write out a moderately detailed MVP is that then we can have a clear picture for others to critique concrete details of, suggest clear empirical or conceptual lines for further work, etc, rather than have most of this conversation a) be overly high-level or b) too tied in with/anchored to existing (non-longtermist) versions of what's currently going on in adjacent spaces. 

Funding a "serious" prediction market.

Not sure if $100M is necessary or sufficient if you want many people or even multiple organizations to seriously work full-time on forecasting EA-relevant questions. Maybe it could also be used to spearhead the usage of prediction markets in politics.

www.ideamarket.io is working on something in the same vein. It's not a prediction market, but it seeks to use markets to identify credible/trustworthy sources.

Disclaimer: I started working with Ideamarket a month ago.

I hope it’s ok to mention something I’d like to do at Foresight Institute:

Crowdsource + Crowdfund Civilization Tech Map


  • Build on this map for Civilizational Self-Realization (scroll to end of article) to create an interactive technology map for positive long-term futures that is crowdsourced (Wikipedia-style) and allows crowdfunding (Kickstarter-style)
  • The map surveys the ecosystem of areas relevant for civilizational long-term flourishing, from health, AI, computing, biotech, nanotech, neurotech, energy, space tech, etc.  
  • The map branches out into milestones in each area, and either lists projects solving them, or requests projects to solve them, including options to fund either
  • Crowdsourcing of milestones and requests for projects will get things very wrong at first, but can be continuously course-corrected, e.g. via prediction markets
  • Crowdfunding gives more and more people skin in the game for the long-term future, e.g. via tokenization, retroactive public goods funding, or a similar mechanism
  • In sum, the map can serve as a north star to coordinate those seeking to work toward positive futures and those seeking to fund such work.

Challenge prize(s) to incentivise the development of innovative solutions in priority areas. These could be prizes for goals already suggested by people in this thread  (e.g. producing resilient food sources, drastic changes to diagnostic testing, meat alternatives underinvested in by the market) or others. 

Quotes from a Nesta report on challenge prizes (caveat that I haven't spent any time looking up opposing evidence/perspectives):

By guiding and incentivising the smartest minds, prizes create more diverse solutions. Because prizes only pay out when a problem has been solved, you can support long shots, radical ideas and unusual suspects while minimising risk...

The high profile of a prize can raise public awareness and shape the future development of markets and technologies. Prizes can help identify best practice, shift regulation and drive policy change...

For the Ansari XPRIZE, 26 teams spent $100 million chasing the $10 million prize, jump starting the commercial space industry.


See also Musk's $100m prize for carbon capture tech

Buy up scarce resources which are being used for bad things and just sit on them. Like the thing where you buy rainforest to prevent logging. Coal mines, agricultural land used for animals, GPUs?!

Interesting idea! I think this works much better when supply is constrained, eg land, and not when supply is elastic (eg GPUs). I'm curious whether anyone has actually tried this

Feels like buying GPUs would just increase their production.

That's true. I just listened to the most recent 80k podcast where they joke about buying up GPUs so it was in my head :) 
Nathan Young
Haha, fair :)

Scaling up carbon removal and other promising climate-related technologies before governments are willing to fund them. A lot like what Stripe and Shopify have been doing, but about an order of magnitude bigger. If the timing is right (I'm not sure it is) this strategy could get a fair bit of leverage by driving costs down and accelerating even larger-scale deployments.


Launching a Nucleic Acid Observatory, as outlined recently by Kevin Esvelt and others here (link to paper). With $100m one could launch a pilot version covering 5 to 10 states in the US.

Activist investment fund which invests in large companies and then leans on them to change their policies. Examples abound in climate change, but other than that:

  • Food related companies to stop factory farming
  • Biotech companies to stop them from doing gain of function or mirror life research

Bezos bought the Washington Post for $250 million. We could try to buy some other media groups, or at least a significant number of shares in them. Some are probably too big, like the New York Times at an $8 billion market cap and Fox Corporation at $20 billion.

I think food-related companies are also probably too big relative to impact, with market caps in the billions or tens of billions of dollars for Tyson, Pilgrim's Pride, JBS SA, McDonald's. You could buy shares in smaller ones, but they also probably have a disproportionately smaller share of farmed animals, although getting a few of them to improve animal welfare policies could make the big ones look bad and push them to follow.

Responding as a member of the ALLFED team.

A network of reliable, long distance shortwave radio systems that do not depend on external sources of electricity and are unable to be disabled by widespread cyber attack, EMP, or most other threats to the global communication infrastructure.

In a wide range of catastrophes, communication systems are a critical vulnerability which, if disrupted, would delay societal recovery from the disaster. A highly resilient and reliable system is HAM shortwave radio, which allows reliable, low-cost communication to a significant fraction of the global population. Maintaining key high-speed communication channels during a catastrophe would greatly increase disaster resilience beyond flyer distribution, potentially at relatively little additional cost. A backup shortwave radio communication system would facilitate timely advice on where to locate clean water sources, identify sensible relocation options, allow improved international cooperation, and allow coordination about the nature and likely duration of the outage.

We’ve identified HAM shortwave radios as key electronic equipment that is both likely to be highly resilient to global communication disruption on large or small scales, and relatively easy to distribute. Another interesting use for these radios is distribution to power grid stations to aid blackstart communications after large-scale electrical grid collapse.

While several network configurations may serve GCR reduction purposes, our preliminary network design involves around a dozen central stations receiving and broadcasting globally, a network of several hundred two-way NVIS transceiver networks operated by trained personnel, and a few thousand distributed receiver-only radios. The network would utilize SSB communications to lower power requirements. To cover the entire earth’s population, we estimate the total construction and shipping cost at between USD $2 million and $10 million, scaling roughly proportionally with the fraction of global population able to be reached by the network.

Total costs would therefore reasonably reach into the tens to hundreds of millions for this sort of mega project, depending on the spatial density of the network.
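As a rough numerical sketch of the scaling logic above: the station counts below come from the preliminary design (a dozen central stations, several hundred NVIS transceiver networks, a few thousand receivers), but all per-unit costs are illustrative placeholder assumptions for a Fermi estimate, not figures from the actual design.

```python
# Fermi estimate of construction + shipping cost for the proposed shortwave
# radio network. Station counts follow the preliminary design sketched above;
# per-unit costs are illustrative assumptions only.

def network_cost(coverage_fraction,
                 central_stations=12, central_cost=50_000,
                 transceiver_networks=300, transceiver_cost=5_000,
                 receivers=3_000, receiver_cost=100):
    """Total cost in USD, scaling roughly proportionally with the
    fraction of global population covered."""
    full_cost = (central_stations * central_cost
                 + transceiver_networks * transceiver_cost
                 + receivers * receiver_cost)
    return coverage_fraction * full_cost

print(f"Full global coverage: ${network_cost(1.0):,.0f}")  # $2,400,000
print(f"Half coverage:        ${network_cost(0.5):,.0f}")
```

With these placeholder unit costs the total lands at the low end of the stated $2-10 million range; the larger tens-to-hundreds-of-millions figure follows from increasing the spatial density of the network well beyond this minimal configuration.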

Announce $100M/year in prizes for AI interpretability/transparency research. Explicitly state that the metric is: "How much closer does this research take us towards being able, when we one day build human-level AGI, to read said AI's mind, understand what it is thinking and why, and what its goals and desires are, ideally in an automated way that doesn't involve millions of person-hours?"
(Could possibly do it as NFTs, like my other suggestion.)

I don't know the area well, but I guess that one option would be to invest in relevant AI companies, to be able to influence their decision-making (and it could also be profitable). I guess that one could in principle invest very large sums in that. And unlike some other suggested projects, it is maybe not necessarily logistically complicated (though it depends on the set-up). Cf. Ryan's comment.

Proof of concept for a geoengineering scheme (could be controversial)

9 PACs have raised/spent more than $100m (source). So an EA PAC?

Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they're both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.

Create global distributed governments.

Governments as they exist today seem antiquated to me as they are linked to particular geographic regions, and the particular shapes and locations of those regions are becoming increasingly irrelevant.

Meanwhile some governments are good at providing for their people – social security, health insurance, enforcement of contracts, physical protection, etc. – so that’s fine, but there are also a lot of governments that are weak in one or more of these critically important departments.

If there were a market of competing global governments, we’d get labor mobility without anyone actually having to move. The governments that provide the best services for the cheapest prices would attract the most citizens.

These governments could draw on something like Robin Hanson’s proposal for a reform of tort law to incentivize a market for crime prevention, could use proof of stake (where the stake may be a one-time payment that the government holds in escrow or a promise of a universal basic income to law-abiding citizens) for some legal matters, and could use futarchy for legislation.

They could also provide physical services, such as horizontal health interventions and physical protection in countries where they can collaborate with the local governments.

An immediate benefit would be the reduction of poverty and disease, but they could also serve to unlock a lot of intellectual capacity by giving people the spare time to educate themselves on matters other than survival. They could define protocols for resolving conflicts between countries and lock in incentives to ensure that the protocols are adhered to. (I bet smart contracts can help with this.)

That way, they could form a union of autonomous parts sort of like the cantons of Switzerland. Such a union of global distributed governments could eventually become a de-facto world government, which may be beneficial for existential security and for enabling the Long Reflection.

Such a government could be bootstrapped out of the EA community. A nonpublic web of trust could form the foundation of the first citizens. If the system fails even when the citizenry is made up largely of highly altruistic, conscientious people who can pay taxes and share a similar culture, it’s probably not ready for the real world. But if it proves to be valuable, it can be gradually scaled up to a broader population spanning more different cultures.

I’ve come to feel like it’s a red flag if such a project bills itself as a distributed state or something of the sort. There seems to be a risk that people would start such a project only to do something grand-sounding rather than solve all the concrete problems that a state solves.

I’d much rather have a bunch of highly specialized small companies that solve specific problems really well (and also don’t exclude anyone based on their location or citizenship) than one big shiney distributed state that is undeniably state-like but is just as flawed as most ge... (read more)

As I alluded to in a comment to KHorton's related post, I believe SoGive could grow to spend something like this much money.

SoGive's core idea is to provide EA style analysis, but covering a much more comprehensive range of charities than the charities currently assessed by EA charity evaluators.

As mentioned there, benefits of this include:

  • SoGive could have a broader appeal because we would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which has high brand recognition in the US (c. 50%, with a bit of rounding).
  • Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
  • There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact

Full disclosure: I founded SoGive.

This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.

The Human Diagnosis Project (disclaimer: I currently work there). If successful, it will be a major step toward accurate medical diagnosis for all of humanity.

Creating a new academic institute - the EA university - that houses a lot of EA research and (somehow) avoids the many issues seen in traditional academia. 

let's add a high school/prep school to it ;-)

Seriously though, I think having an institute more separate than GPI would not be great for disseminating research and gaining reputation. It would be nice though for training up EA students.

I'd be interested in thinking more about this, even as just a thought experiment :) 

I like this! However, in a perfect world, rather than there being one university (or one institute at one university) that studies global priorities, wouldn't all top research universities across the world have global priorities schools (like the business or policy schools prevalent at most research universities)? With philosophers and scientists working together in one school on having the most impact on humanity, and coordinating with one another on how to do so—where students can get PhDs in Global Priorities Research (with specialization in one of ... (read more)

I think that's James' central claim. I personally find myself confused about how much EA research should be done in academia vs outside of it; I can imagine us moving more towards academia (or other more standardized systems) as we institutionalize.
Sami Kassirer
Why would we have to choose between EA research being in vs. out of academia--why not both (which is kind of what we do now, right)? 
Academia has a lot of costs and benefits. It would be moderately surprising if the costs and benefits exactly balance out (or come anywhere close) for the median EA researcher. 

OK, throwing out an idea here… could somebody cobble together a massive direct cash transfer fund? It’s not like there’s a lack of global poor to receive funding…

(Submitted without knowing a whole lot of details about cash transfers; I just know they are a thing.)

I looked up GiveDirectly's financials (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers and hence is definitely capable of handling that amount! This is mostly $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.

Paul's "message in a bottle" for future civilisations

Creating a program like Birthright offering free, all-inclusive 10-day trips to countries where EA global health/development programs are run (e.g. Malawi).

The trip could be targeted towards high achieving youth with a focus on helping make the abstract ideals of EA feel more "real", in addition to being loaded with all sorts of EA programming.

How can we foster longterm global trust and status as a social movement? In order to foster global backing for some of the movement's non-normative or 'creative' ideas (e.g., building post-apocalyptic bunkers to help rebuild society in case of nuclear war) that may actually be highly impactful in the longterm future, we likely need to first prove ourselves as a movement that can actually create large-scale global impact.

Here's one idea for a megaproject that could help to foster global trust/status by proving our ability to use evidence and reason to make a positive impact on the world:

  • Part 1: Survey representative samples of most (or all) countries and ask them “if you had 100 million dollars and wanted to use this money to make the world a better place, how would you spend it?”, giving open-ended text and a rank order option of some of the things we’re considering
    • Getting cross-cultural responses to this question could produce the most global backing for EA, and it could look *very good* if we made the movement more democratic! But the latter is an empirical question (i.e., perceived trust in a social movement when the movement relies on experts only, the masses only, or a mix of experts and the masses, vs. a no-mention control)
  • Part 2: Create a list of top 10 or so most cared about global issues, and have EA researchers rank each of them in terms of total impact and effectiveness
  • Part 3: run another RCT on nationally representative samples globally, comparing the globally top-ranked cause area to the most effective EA cause within the top 10 (if these two aren't the same), to look at trade-offs between indirect movement-building impact and direct cause-area impact --> after this RCT, choose the cause area that will produce the most total impact as the "winner" of the $100 million grant.
  • Part 4: run a large grant competition to find the best approaches to solving whatever cause area is selected globally (note: I’d hypothesize that it’s very important to solve big issue *globally* to facilitate a new norm of collective global action and foster obligation perceptions towards EA from all countries), and aim to R&D for about 5-10 years (rough estimate), and then roll out the most effective intervention(s) based on these findings
  • (Repeat this every X years to maintain longterm support of EA)

Samo Burja said at a meetup the other day that he thinks Vitalik Buterin should give a medium-sized university $10 million to put ten top-tier internet bloggers on tenure. No idea if that's a good idea or anywhere near possible, but it could use a decent amount of money for a while.

Educate, empower, and enable diverse talent to work on solutions for the world’s biggest issues.

What is it?

A remote school offering tuition-free education and job placement for vital roles (data scientist, researcher, engineer, etc.) in areas of crucial need (climate, economics, healthcare, etc.).


How does it work?
  • Identify important areas where key talent is lacking.
  • Establish tuition-free online school led by top thinkers.
  • Dispense task-oriented knowledge in short period of time.
  • Create post-graduation job placement program for sectors in need.


What are the benefits?
  • Remove barriers to higher education.
  • Create access to opportunities, regardless of location, language, background, etc.
  • Lift people out of poverty.
  • Funnel talent into organizations and projects that need the most support.
  • Solve range of vital issues.
  • Grow pool of world problem solvers.
  • Inspire next generation of doers and founders.


How does it scale?
  • Open up to more students, more languages, more education levels, more areas of speciality.
  • Create accelerator program to invest in alum startups.


Two things that scale well are knowledge and technology. So, rather than attempting to choose a single area of focus, create a megaproject that both democratizes pursuits and crowdsources solutions. This has the potential to produce a network effect across a variety of problems while removing hierarchical barriers. Scaling continues until new talent declines to join and/or roles disappear, or problems are solved (due to a lack of new focus areas and/or some yet-to-be-realized superior option, i.e. ML/AI).


How about Qualia Research Institute?


(extremely speculative) 

Promote global cooperation and moral circle expansion by paying people (/ incentivising them in some smarter way) to have regular video calls with a random other person somewhere on the planet.

I think aligning narrow superhuman models could be one very valuable megaproject and this seems scalable to >= $100 million, especially if also training large models (not just fine-tuning them for safety). Training their own large models for alignment research seems to be what Anthropic plans to do. This is also touched upon in Chris Olah's recent 80k interview.

Doing something to democratize randomized controlled trials (RCTs) - thereby reducing the risk involved in testing new ideas and interventions.

RCTs are a popular methodology in medicine and the social sciences. They create a safety net for the scientists (and consumers) to test that the drug works as intended and doesn't turn people into mutants.

I think using this methodology in other fields would be a high-leverage intervention. For example startups, policy-making, education, etc. Being able to try out new ideas without facing a huge downside should be a feature of every field.  Big institutions already conduct similar tests before they release something. But I'm wondering how useful it would be to allow small institutions, startups, and maybe even individuals to do this.

Plus, adding an RCT into the launch pipeline of any intervention/product allows us to see the unintended consequences before they're out there. I think this would have at least been helpful for the social media companies.

Based on some googling, I've understood that RCTs are very costly. But if the reasoning makes sense, this is exactly the kind of thing a megaproject should fund, precisely because others can't try it out.

Here's a paraphrased quote by Eliezer Yudkowsky, that is relevant in this context: If people could learn from their mistakes without dying from them, well actually, that in itself would tend to fix a whole lot of problems over time. [source]

P.S. I'm thinking of working on this idea full-time in 2022. It would be very helpful to hear whatever criticism/thoughts you have - it'll help me make sure my time is effectively spent.

I think you should write this up as a full post or at least as a question. 

I don't think people will see this, and you deserve reasonable attention if it's a full-time project.

Note that my knee-jerk reaction is caution. The value of RCTs is well known and they are coveted. Then, in the mental models I use, I would discount the idea that they could be readily distributed.

For example, something like the following logic might apply:

  • An RCT, or something that looks like it, with many of the characteristics/quality you want, will cost more than the seed g
... (read more)

An organizational version of 80k, GiveWell, or Project Drawdown for "incentives". That is, an organization that specializes in 1) solving incentive problems in the most effective way possible (ease of implementation, minimizing costs, minimizing side effects...), and 2) identifying priority changes based on their research (in general or for specific public policies such as climate change or longtermism...).

Could you give an example of what this might look like?

Ko Dama
Yes (and sorry for my English; I am French and not very good at English). Summary in a few lines: at the level of a country (though it could be at another level of governance), the organization chooses one or several indicators aimed at maximizing long-term well-being. It identifies the priority areas affecting them (based on importance, neglectedness, and tractability). For each area, it analyzes the incentive structure, meaning all the forces that push in a certain direction (e.g. what are the incentives of the 40 most influential people and organizations in this area?). It compares this with the system that would be needed to move forward in a robust way (which implies, and this would be the whole purpose of the organization, developing expertise on this). It then identifies the most relevant levers to make the system evolve (ease of implementation, political acceptability, efficiency...). Finally, it prioritizes each area according to the expected utility of the proposed systemic reforms.

One can also imagine a less ambitious version, for example a J-PAL of incentives, which would help governments that call on it with a specific problem (for example: increasing the mathematical performance of students).

I see several advantages. 1) It focuses decision-makers on priority problems (as 80k does for individual careers, or GiveWell for donations). 2) Incentives are a language that speaks to economists, whose influence on governments is significant. They have a real impact on the world, are often not aligned with the common good, and seem fairly objectifiable (in an otherwise extremely complex social world). 3) The cost-benefit ratio can be very high insofar as some systemic changes have almost no cost. The best example I can think of is this article by Eliezer Yudkowsky (a comprehensive reboot of law enforcement), which gives an overview of the process I imagine. And with more quantitative models, an analysis of the decision-making process to facilitate the chances

We could finance ballot initiatives, lobbying, or running our own candidates. Running US presidential primary candidates could shift conversations and bring attention to issues (although bringing attention to an issue can backfire). Bloomberg spent over $500 million on his own presidential primary campaign and finished fourth.

Running presidential candidates could be risky for EA, though. Non-partisan ballot initiatives seem safer.

(highly speculative and I see a lot of flaws, but I can see it scaled)

EA training institute/alternative university. Something like creating Navy SEALs: highly selective, with a high dropout rate, but producing the most effective people (relative to a certain goal) in the world.

My hunch is that this isn't a $100m-per-year project within reasonable time frames (the same is true of several other suggestions in this thread). Cf. Kirsten's post.

This isn't really a megaproject, but I'm a bit busy to make a top-level post of it so I'm dropping it in here.

An evidence clearinghouse informed by Bayesian ideas and today's political mess.

One of humanity's greatest sources of conflict in the modern era is disagreement about (1) the facts, and (2) how to interpret them. Even basic facts are often difficult to distinguish from severe misinterpretations. I used to be hugely interested in climate misinformation, and now I'm looking at anti-vax material, but the problem is the same and has real consequences, from my unvaccinated former legal guardian dying of Covid (months after I questioned popular anti-vax evidence) to various genocides fueled by popular prejudices.

To me, a central problem is that (1) most people believe it is easy to figure out what the truth is, so they do not work very hard at verifying facts; (2) they don't actually have enough time to verify facts anyway (doing it well is hard and very time-consuming!); and (3) when they do, much of the effort is wasted because there is no durable place where the information they discover can be permanently stored, shared, and cross-referenced by others. The multi-millionaire antivaxxer Steve Kirsch has a dedicated Substack with "thousands" of customers paying $5/mo. or $50/year to hear his latest Gish gallop, while debunkings of Steve Kirsch are scattered around at random and (AFAIK) unprofitable. If I personally discover something, I might mention it to someone on ACX and/or dump it in the old thread I linked to above; meanwhile, here's a guy who got 359 "claps" on Medium for his debunking. The response is disorganized and not nearly as popular as the original misinformation.

Another example: I spent 27 years in a religion I now know is false.

Or consider what happened on the extremely popular Joe Rogan program that inspired this meme (a joke, but some believe it was a true story):

Joe Rogan: hamburgers are good but I am trying to eat less pork
Guest: hamburgers are made with beef
Joe Rogan: ham is from pork it says ham in hamburger
Guest: it is beef
Joe Rogan: that’s not what I’ve heard Jamie look that up
Jamie: it beef
Guest: it beef
Joe: ok but can we really trust hamburger makers and butchers and grocery stores when the word ham is in hamburger and ham means pork 
Joe Rogan Fans: this is why I like him he is good at thinking

There are studies (Singer et al., Patone et al. 2021) showing a small risk of myocarditis in young people who catch Covid, and a much smaller risk of myocarditis in young people who take an mRNA Covid vaccine. Naturally, since he often listens to anti-vaxxers, Rogan had it backwards and thought the risk was higher in those who had been vaccinated. If you watched this program, you'd probably come away confused about whether vaccines are worse than the disease.

Obviously a web site isn't going to solve this whole problem, but the absence of such a web site is a serious problem that we can solve.

Another way of framing the central problem is as a matter of distrust of institutions. My sense is that a large minority of the population doesn't trust government organizations and doesn't trust scientific research if it is done with money from the government or big companies, yet at the same time they do seem to trust random bloggers and political pundits who have the "right" opinions. But it's worse than that: anybody can put up a PDF and say "this is a peer-reviewed paper", or put up a web site and call it a peer-reviewed journal. For instance, consider the Walach paper that was retracted for various errors, such as the antivax cardinal sin of ignoring base rates of disease and death—see if you can spot this error in action:

...there were 16 reports of severe adverse reactions and 4 reports of deaths per 100,000 COVID-19 vaccinations delivered. According to the point estimate [...] for every 6 (95% CI 2-11) deaths prevented by vaccination in the following 3–4 weeks there are approximately 4 deaths reported to Lareb that occurred after COVID-19 vaccination. Therefore, we would have to accept that 2 people might die to save 3 people.

But antivax scientists have their own "peer-reviewed journal", which republished the paper with no mention of the earlier retraction, and Kirsch simply linked to that instead. Right now, to figure out that this paper is garbage, you have to suspect that "something is wrong" with it and its journal, and to know what's wrong with it exactly, you have to comb through it looking for the error(s). But that's hard! Who does that? No, in today's world we are almost forced to rely on a more practical method: we notice that the conclusion of the paper is highly implausible, and so we reject it. I want to stress that although this is perfectly normal human behavior, it is exactly like what anti-science people do. You show them a scientific paper in support of the scientific consensus and they respond: "that can't be true, it's bullsh**!" They are convinced "something is wrong" with the information, so they reject it. If, however, there were some way to learn about the fatal flaws in a paper just by searching for its title on a web site, people could separate the good from the bad in a principled way, rather than mimicking the epistemically bad behavior of their opponents.
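To make the base-rate problem concrete, here is a small sketch. Every number below is made up purely for demonstration and comes from no study; the point is only the shape of the check that the retracted paper skipped:

```python
# Illustration of the base-rate check the retracted paper skipped.
# Every number here is made up for demonstration; none comes from any study.

vaccinated = 100_000          # people vaccinated in some reporting window
reported_deaths = 4           # deaths *reported* after vaccination per 100k (any cause)

# Base rate: how many of these people would die in the same window regardless?
# Assume ~1% all-cause mortality per year and a 4-week reporting window.
annual_mortality = 0.01
window_years = 4 / 52
expected_background = vaccinated * annual_mortality * window_years

print(f"Reported after vaccination: {reported_deaths}")
print(f"Expected from background mortality alone: {expected_background:.0f}")
# Background deaths alone (~77 here) dwarf the 4 reports, so a raw count of
# "deaths after vaccination" by itself says nothing about deaths *caused by*
# vaccination.
```

With hypothetical inputs like these, the deaths you'd expect anyway swamp the number reported, which is exactly why comparing "deaths reported after vaccination" to "deaths prevented" without subtracting the base rate is meaningless.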

So I envision a democratization of evidence evaluation, as an alternative to the despised "ivory towers": a site where anyone can go to present evidence, vote on its significance, and construct arguments. Something that:

  • uses Wikipedia and other well-sourced articles as a seed, and eventually grows into something hundreds of times larger;
  • has an automated reputation system like StackOverflow;
  • has a network of claims, counterclaims, and evidence for each, where no censorship is necessary because false claims are shown not to be credible under the weight of counterevidence;
  • lets people recursively argue over finer and finer points, and recursively combine smaller claims ("greenhouse gases can increase average planetary surface temperature", "humans are causing a net increase of greenhouse gases") to build larger claims ("humans are causing global warming via greenhouse gas emissions");
  • replaces vague or inaccurate claims over time with clearer and more precise ones, and gives steelmen more prominence than strawmen;
  • requires offline and paywalled references to be cited with a quote or photo so users can verify the claim;
  • has people vote not to "like or dislike" statements, but on epistemically useful questions like "this is a fair summary of the claim made in the source" and "the conclusion follows from the premises", with the credibility of sources itself an entire universe of debate and evidence.
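As a rough sketch of the underlying data model such a site might use (everything here is hypothetical; this is not a description of any existing system), claims could form a graph with attached evidence and per-question vote tallies:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source_url: str
    quote: str          # verbatim quote or photo reference, so users can verify it
    supports: bool      # True = supports the claim, False = counter-evidence

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)  # smaller claims this one builds on
    votes: dict = field(default_factory=dict)      # epistemic question -> (agree, disagree)

    def vote(self, question: str, agree: bool) -> None:
        """Record a vote on a specific epistemic question, not a generic 'like'."""
        a, d = self.votes.get(question, (0, 0))
        self.votes[question] = (a + 1, d) if agree else (a, d + 1)

# Combining smaller claims into a larger one, as described above:
ghg = Claim("Greenhouse gases can increase average planetary surface temperature")
human = Claim("Humans are causing a net increase of greenhouse gases")
warming = Claim("Humans are causing global warming via greenhouse gas emissions",
                subclaims=[ghg, human])
warming.vote("the conclusion follows from the premises", agree=True)
print(warming.votes)  # {'the conclusion follows from the premises': (1, 0)}
```

A reputation system would then weight these votes, but even this bare structure captures the key design choice: votes attach to precise questions about a claim, and big claims are built recursively from smaller, separately contestable ones.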

This site is just one idea I have under my primary cause area, "Improving Human Intellectual Efficiency" (IHIE), which, taken as a whole, could be a megaproject. I have been meaning to publish an article on the cause area, but haven't found the time and motivation in the last year. Anyway, while it's possible to figure out the truth in today's world, it's only via luck (e.g. good teachers) or a massively inefficient and unreliable search process. Let's improve that efficiency, and maybe fewer people will volunteer to kill and die, and more people will understand their world better.

I think this relates to the top-rated answer too, since the lack of support for nuclear power is driven by unscientific myths. After Fukushima, it seemed like no one in the media was even asking how dangerous a given amount of radiation is, as if it made sense to forcibly relocate over 100,000 people without checking the risk first. The information was so hard to find that I ended up combing through the scientific literature for it. I didn't find it there either, just some information I could use as input for my own back-of-envelope calculation, which indicated that 100 mSv of radiation might yield a 0.05% chance of death by leukemia (IIRC), less than normal risks from air pollution. Was my conclusion reasonable? If this site existed, I could pose the question there.
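For what it's worth, a back-of-envelope estimate of this kind can be reproduced under the linear no-threshold (LNT) assumption using the ICRP's nominal risk coefficient (roughly 5.5% lifetime fatal-cancer risk per sievert, from ICRP Publication 103). Whether LNT holds at low doses is itself contested, so treat the whole calculation as an assumption, not a settled figure:

```python
# Back-of-envelope radiation risk under the linear no-threshold (LNT) assumption.
# Coefficient: ~5.5% lifetime fatal-cancer risk per sievert (ICRP Publication 103,
# whole-population nominal value, all cancers combined).

dose_sv = 0.100               # 100 mSv expressed in sieverts
fatal_cancer_per_sv = 0.055   # ICRP 103 nominal risk coefficient
risk = dose_sv * fatal_cancer_per_sv
print(f"Estimated lifetime fatal-cancer risk: {risk:.2%}")  # 0.55% for all cancers
# Leukemia is only a fraction of total cancer mortality, so a ~0.05% leukemia
# figure is within the right order of magnitude under LNT.
```

So the 0.05%-leukemia estimate is at least not crazy under LNT, which is exactly the kind of sanity check a shared evidence site could record once instead of everyone redoing it.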

Technological developments in the biotech / pharma industry are notoriously expensive, and my (fairly subjective) impression is that the industry is riddled with market failures.

Especially when applied to particularly pressing problems like pandemic prevention / preparedness, infectious diseases in LMICs, vaccines, ageing and chronic pain, I think EA for-profits and non-profits in this industry could absorb 100 million dollars of annual funding while providing high expected value in terms of social impact.

Universal flu vaccine development and testing  

(Idea probably stolen from somewhere else.) Create an organisation employing an army of superforecasters to gather facts and/or forecasts about the world that are vitally important from an EA perspective.

Maybe it's hard to get to $100 million? E.g. 400 employees each costing $250k would get you there, which (very naively) seems on the high end of what's likely to work well. Also, other comments in this post note that CSET was set up for $55m over 5 years.

Re-reading this, I realise I'm not sure whether these projects are supposed to cost $100 million per year, or e.g. $100 million over their lifetime, or something in between.

Nathan Young
They are meant to grow to eventually be spending 100 million a year.

Qualia Research Institute

Maybe diet pledge programs like Veganuary and Challenge 22? They could spend a lot more on ads and expand to more countries. Maybe this would be better set up like the Open Wing Alliance, where The Humane League supports, trains and regrants to local organizations working on cage-free campaigns in different countries.

I'm not sure this could reach $100 million while still spending reasonably cost-effectively, though.

How do companies sponsor films and TV? 

Rather than paying for a film to have Fords in it, pay for it to have more EA ideas. 

Feels like it might be seen as propaganda and could backfire.

[This comment is no longer endorsed by its author]

This isn't megaproject scale and is just marketing.

Shamelessly copy the success of StitchFix, but apply it to the food industry, sending only information rather than the actual food.

I've thought about this one a lot, so I'll try my best to summarize it:
Cook. Eat. Rate. Repeat.

The foundation would have data scientists/engineers behind the scenes who help customers find their perfect recipes via information and testing. The foundation would eventually expand into eating out at sustainable restaurants based on feedback from the customer, then merge into community vertical farming, which moves into individual household vertical farming.

The company Yummly is pretty close to this but isn't quite there yet, and is expanding in the wrong direction imo.

Revamping the food industry so that we are not dependent on grocery stores' supply chains, and are instead growing food inside our own homes and creating absolutely delicious recipes from around the world, would be a massive, healthy impact. It's something the US could easily benefit from. In my opinion, it's only a matter of time before every individual will have to (mostly) live off the land in their backyard again, and to head off that catastrophe, we create FoodieFix.

Revamping the food industry to where we are not dependent on grocery stores' supply chains and instead growing it downstairs inside our own homes and creating absolutely delicious recipes from around the world is massive, healthy impact

Sorry, why? This just seems really minor in the grand scheme of things, unless I'm missing something important (which is very possible).
