This isn't really a megaproject, but I'm a bit busy to make a top-level post of it so I'm dropping it in here.
An evidence clearinghouse informed by Bayesian ideas and today's political mess.
One of humanity's greatest sources of conflict in the modern era is disagreement about (1) the facts, and (2) how to interpret them. Even basic facts are often difficult to distinguish from severe misinterpretations. I used to be hugely interested in climate misinformation, and now I'm looking at anti-vax material, but the problem is the same and has real consequences: from my unvaccinated former legal guardian dying of Covid (months after I questioned popular anti-vax evidence), to various genocides fueled by popular prejudices.
To me, a central problem is that most people (1) believe it is easy to figure out what the truth is, so they don't work very hard at verifying facts, (2) don't actually have enough time to verify facts anyway (doing it well is hard and very time-consuming!), and (3) waste much of the effort they do spend, because there is no durable place where the information they discover can be permanently stored, shared, and cross-referenced by others. The multi-millionaire antivaxxer Steve Kirsch has a dedicated Substack with "thousands" of customers paying $5/mo. or $50/year to hear his latest Gish gallop, while debunkings of Steve Kirsch are scattered around at random and (AFAIK) unprofitable. If I personally discover something, I might mention it to someone on ACX and/or dump it in the old thread I linked to above; meanwhile, here's a guy who got 359 "claps" on Medium for his debunking. The response is disorganized and not nearly as popular as the original misinformation.
Another example: I spent 27 years in a religion I now know is false.
Or consider what happened on the extremely popular Joe Rogan program that inspired this meme (a joke, but some believe it was a true story):
Joe Rogan: hamburgers are good but I am trying to eat less pork
Guest: hamburgers are made with beef
Joe Rogan: ham is from pork it says ham in hamburger
Guest: it is beef
Joe Rogan: that’s not what I’ve heard Jamie look that up
Jamie: it beef
Guest: it beef
Joe: ok but can we really trust hamburger makers and butchers and grocery stores when the word ham is in hamburger and ham means pork
Joe Rogan Fans: this is why I like him he is good at thinking
There are studies (Singer et al., Patone et al. 2021) showing a small risk of myocarditis in young people who catch Covid, and a much smaller risk in young people who take an mRNA Covid vaccine. Naturally, since he often listens to anti-vaxxers, Rogan had it backwards and thought the risk was higher in the vaccinated. If you watched this program, you'd probably come away confused about whether vaccines are worse than the disease.
Obviously a web site isn't going to solve this whole problem, but the absence of such a web site is a serious problem that we can solve.
Another way of framing the central problem is as a matter of distrust of institutions. My sense is that a large minority of the population doesn't trust government organizations, and doesn't trust scientific research funded by the government or big companies, yet at the same time does seem to trust random bloggers and political pundits who have the "right" opinions. But it's worse than that: anybody can put up a PDF and say "this is a peer-reviewed paper", or put up a web site and call it a peer-reviewed journal. For instance, consider the Walach paper that was retracted for various errors, such as the antivax cardinal sin of ignoring base rates of disease and death. See if you can spot this error in action:
...there were 16 reports of severe adverse reactions and 4 reports of deaths per 100,000 COVID-19 vaccinations delivered. According to the point estimate [...] for every 6 (95% CI 2-11) deaths prevented by vaccination in the following 3–4 weeks there are approximately 4 deaths reported to Lareb that occurred after COVID-19 vaccination. Therefore, we would have to accept that 2 people might die to save 3 people.
But antivax scientists have their own "peer-reviewed journal", which republished the paper with no mention of the earlier retraction, and Kirsch simply linked to that instead. Right now, to figure out that this paper is garbage, you have to suspect that "something is wrong" with it and its journal, and to know what's wrong with it exactly, you have to comb through it looking for the error(s). But that's hard! Who does that? No, in today's world we are almost forced to rely on a more practical method: we notice that the conclusion of the paper is highly implausible, and so we reject it. I want to stress that although this is perfectly normal human behavior, it is exactly like what anti-science people do. You show them a scientific paper in support of the scientific consensus and they respond: "that can't be true, it's bullsh**!" They are convinced "something is wrong" with the information, so they reject it. If, however, there were some way to learn about the fatal flaws in a paper just by searching for its title on a web site, people could separate the good from the bad in a principled way, rather than mimicking the epistemically bad behavior of their opponents.
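To make the base-rate error concrete, here is a quick sanity check of the kind the paper skipped. All numbers here are my own illustrative assumptions (not figures from the paper, apart from the "4 deaths reported per 100,000 vaccinations" in the quote): deaths reported shortly after vaccination have to be compared against the deaths you would expect in any group of that size over the same window, vaccine or no vaccine.

```python
# Base-rate sanity check (illustrative assumptions, not the paper's figures):
# deaths reported within weeks of vaccination must be compared against deaths
# expected from ALL causes in any group of that size over the same window.

def expected_background_deaths(population, annual_mortality_rate, weeks):
    """Deaths expected from all causes in `population` people over `weeks` weeks."""
    return population * annual_mortality_rate * weeks / 52

reported_after_vaccination = 4          # per 100,000 vaccinations (from the quote)
background = expected_background_deaths(
    population=100_000,
    annual_mortality_rate=0.009,        # ~0.9%/yr: a rough all-ages guess (assumption)
    weeks=4,
)
print(f"Expected background deaths: {background:.0f}")   # ~69
print(f"Reported after vaccination: {reported_after_vaccination}")
# 4 reported deaths per 100,000 is far below the ~69 expected by chance alone,
# so passive surveillance reports cannot be read as vaccine-caused deaths.
```

And since early vaccinations skewed heavily toward the elderly, the true background rate for the vaccinated cohort would have been even higher than this all-ages guess.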
So I envision a democratization of evidence evaluation, as an alternative to the despised "ivory towers". A site where anyone can go to present evidence, vote on its significance, and construct arguments. Something that uses Wikipedia and other well-sourced articles as a seed, and eventually grows into something hundreds of times larger. Something that has an automated reputation system like StackOverflow. Something that has a network of claims, counterclaims, and evidence for each. Where no censorship is necessary, as false claims are shown not to be credible under the weight of counterevidence. Where people recursively argue over finer and finer points, and recursively combine smaller claims ("greenhouse gases can increase average planetary surface temperature", "humans are causing a net increase of greenhouse gases") to build larger claims ("humans are causing global warming via greenhouse gas emissions"). Where vague or inaccurate claims are replaced over time by clearer and more precise claims. Where steelmen gain more prominence than strawmen. Where offline and paywalled references must be cited with a quote or photo so users can verify the claim. Where people don't "like or dislike" statements, but vote on epistemically useful questions like "this is a fair summary of the claim made in the source" and "the conclusion follows from the premises", and where the credibility of sources is itself an entire universe of debate and evidence.
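The claim network described above could be modeled, at a minimum, as a graph of claims with attached evidence and dependency links. This is a hypothetical sketch under my own naming choices, not a specification for the site:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claim network (my own design, not a spec):
# claims carry evidence voted on with specific epistemic questions, and
# larger claims depend recursively on smaller ones.

@dataclass
class Evidence:
    source: str
    quote: str                     # offline/paywalled sources must include a quote
    supports: bool                 # False = counterevidence
    votes: dict = field(default_factory=dict)  # e.g. {"fair summary of source": 12}

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)  # smaller claims this builds on

# Larger claims are composed recursively from smaller ones:
ghg_warming = Claim("Greenhouse gases can increase average planetary surface temperature")
human_ghg = Claim("Humans are causing a net increase of greenhouse gases")
agw = Claim(
    "Humans are causing global warming via greenhouse gas emissions",
    depends_on=[ghg_warming, human_ghg],
)

def leaf_claims(claim):
    """Recursively collect the foundational sub-claims a claim rests on."""
    if not claim.depends_on:
        return [claim]
    leaves = []
    for sub in claim.depends_on:
        leaves.extend(leaf_claims(sub))
    return leaves

print([c.text for c in leaf_claims(agw)])
```

The point of the recursive structure is that debates over the big claim decompose into debates over its parts, each with its own evidence and votes.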
This site is just one idea I have under my primary cause area, "Improving Human Intellectual Efficiency" (IHIE), which, taken as a whole, could be a megaproject. I have been meaning to publish an article on the cause area, but haven't found the time and motivation to do it in the last year. Anyway, while it's possible to figure out the truth in today's world, it's only via luck (e.g. good teachers) or a massively inefficient and unreliable search process. Let's improve that efficiency, and maybe fewer people will volunteer to kill and die, and more people will understand their world better.
I think this relates to the top-rated answer too, since the lack of support for nuclear power is driven by unscientific myths. After Fukushima, it seemed like no one in the media was even asking how dangerous X amount of radiation is, as if it made sense to forcibly relocate over 100,000 people without checking the risk first. The information was so hard to find that I ended up combing through the scientific literature for it. I didn't find it there either, just some inputs for my own back-of-envelope calculation, which indicated that 100 mSv of radiation might yield a 0.05% chance of death by leukemia (IIRC), less than the normal risks from air pollution. Was my conclusion reasonable? If this site existed, I could pose my question there.
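A back-of-envelope of the kind described might look like this. The risk coefficient is my own illustrative assumption under the linear no-threshold (LNT) model, not the author's input or a vetted figure:

```python
# Back-of-envelope radiation risk under the linear no-threshold (LNT)
# assumption. The coefficient below is an illustrative assumption
# (roughly ~0.5% fatal leukemia per sievert), not a vetted figure.

LEUKEMIA_DEATH_RISK_PER_SV = 0.005   # assumption: ~0.5% per Sv, LNT

def leukemia_death_risk(dose_msv):
    """Lifetime fatal-leukemia risk from a dose in mSv, assuming LNT."""
    return (dose_msv / 1000) * LEUKEMIA_DEATH_RISK_PER_SV

print(f"{leukemia_death_risk(100):.2%}")   # 100 mSv -> 0.05%
```

Under those assumptions the arithmetic reproduces the 0.05% figure, but whether LNT and that coefficient are the right inputs is exactly the kind of question the proposed site would adjudicate.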
Out of all the ideas, this seems the most shovel-ready.
MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.
All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly.
Possibly not all of that money would meet EA standards of cost-effectiveness, though. Indeed, MacArthur's withdrawal provides some evidence that it isn't cost-effective (if we trust their judgement).
Here's the interesting, frustrating evaluation report: https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf.pdf
Looks to me like a classic hits-based-giving bet: you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama's JCPOA/New START) you get a home run.