This post is a continuation/extension of the EA megaproject article by Nathan Young. Our list is by no means exhaustive, and everyone is welcome to extend it in the comments. 

The people who contributed to this article are in no particular order: Simon Grimm, Marius Hobbhahn, Max Räuker, Yannick Mühlhäuser and Jasper Götting. 

We would like to thank Aaron Gertler for his feedback and suggestions. 

Donating to GiveDirectly is often seen as a scalable baseline for the effectiveness of a project. Another scalable baseline could be investing in clean energy research. We don’t expect all of the following ideas to meet either bar. Some of them might just be interesting or thought-provoking. 
The value of these megaprojects depends, among other things, on your risk profile, your views on patient philanthropy and the option value of the idea. 

Movement building:

There are many EA organizations in the larger space of “getting more EAs into the right position”, such as 80K, different scholarships, local chapters, etc. However, given that EA is currently more people- than resource-constrained, it seems plausible that there is much room for growth. 

Some people have argued to “keep EA small” (see here or here). However, we don’t think it makes sense to limit access to the movement to the extent that EA would stop growing. A plausible future role for EA in society might be similar to science. Everyone broadly knows what science is, and most people have a good opinion of it, but most people aren’t scientists themselves. Most scientific fields are competitive, select for highly talented people, and grow somewhere between linearly and exponentially. The goal should not be to approach as many people as possible (as religions do) but to identify and funnel talented individuals into the movement (as sports organizations or universities do). 

Ideas for bigger projects in the space of movement building include:

1. A large global scholarship foundation: 

EAs could build a foundation that helps young people with their choice of study while providing financial support, visas and mentorship during their studies. Such a foundation would also yield an extensive, global network of young, talented individuals. Furthermore, such a foundation could scout for talented individuals earlier on, through events similar to existing math/science/coding/forecasting tournaments, or through recruiting as in sports. Some of these elements already exist on a smaller and more informal scale, but a global institution would provide increased visibility, prestige, and scaling effects. Furthermore, a scholarship foundation could focus specifically on developing countries and identify talented individuals who could migrate to richer countries. The aim would be to create an organization that carries as much prestige as the Fulbright or Rhodes scholarships. 

2. Buy a Big Newspaper: 

Philanthropists and other wealthy individuals regularly buy or own major news outlets (Jeff Bezos bought WaPo, Rupert Murdoch owns vast swathes of center-right to right-wing media). An affluent EA could buy a trusted newspaper and direct it in a more evidence-based direction, for instance, by incorporating forecasts into reporting or by highlighting certain positive developments that get neglected. Future Perfect has shown how EA-influenced reporting can have a place in a major news outlet, and the Ezra Klein show sometimes uses an EA framing, but we think there is room for more. This could have a particularly relevant impact in non-English speaking countries.

Science:

1. Fund very large RCTs/population-wide studies

Conventional RCTs are often underpowered to detect certain (often population-wide) relationships. Nonetheless, knowing about these relationships can be impactful, as seen in the recent literature on the harms of air pollution. Funding more large-scale studies could uncover further important knowledge. Valuable targets for such an effort might be mental health, the impacts of different foods, pollution, fertility, nootropics (cognitive enhancers), the effects of agrochemicals, and many more. But the marginal benefit of such studies might be small, given that governments are already incentivized and interested in most of these research questions.

2. A Max-Planck Society (MPG) for EA research

In 2019 the MPG had a yearly budget of €2.5 billion, distributed among 86 institutes, i.e. roughly €30 million per institute on average. The MPG institutes attract a lot of international talent and are usually ranked among the best institutions in their area of research. They allow researchers to focus entirely on research, with no teaching or admin requirements. Moreover, MPG institutes provide grants for very long projects: durations of up to 10 years aren’t uncommon. A similar English-speaking institution might be the Institute for Advanced Study.
Setting up 3-5 EA-aligned institutes with a similar model would attract talent, increase prestige, and boost research output. These institutes could focus on classic EA topics such as longtermism, AI safety, biosecurity, global health and development, animal welfare, and so on. 

3. An EA university (suggested by Will MacAskill at EAG2021)

In contrast to a research-focused institution such as the Max-Planck Society, a university would include teaching and a broader range of topics and disciplines. Nonetheless, a university and an MPG-style institute have substantial overlap. 
Given that education and career choice are important aspects of EA, it might make sense to launch an entire university. This university could offer both generalist degrees covering all EA subfields and focused degrees on AI alignment, biosecurity, animal welfare, and so on. An EA-funded university could pay higher salaries, attracting highly talented researchers and thus increasing its popularity among students. Some researchers could be freed from teaching requirements, while others might be hired explicitly to provide high-quality teaching for the students’ benefit. 

4. Fund professorships focusing on EA-relevant topics 

This proposal is the smaller, more diversified, and less public version of the EA university idea.

5. A Forecasting Organization (suggested by Will MacAskill at EAG2021)

Create a forecasting institute that employs top-performing forecasters to forecast EA-relevant questions. Such an institute could also host, maintain and develop currently existing forecasting platforms (1, 2), since most existing platforms are relatively small and rely on the work of volunteers. 

6. Create EA-aligned advance-market commitments (AMC)

AMCs incentivize the development of products by guaranteeing a government or non-profit pay-out or purchase after development. They have a good track record in facilitating vaccine development against neglected diseases, such as pneumococcal disease.
AMCs could be used in cause areas such as biosecurity, incentivizing researchers to develop technologies with little or no existing market. Some examples within biosecurity are antivirals against potential pandemic pathogens (SARS, MERS, influenza), widespread cheap metagenomic sequencing, needle-free broad-spectrum vaccines, or better PPE (i.e. more comfortable, safer, and better looking). Furthermore, EA-guided development would enable norm-setting and the prioritization of low-downside technologies (differential technological development). Finally, AMCs could also be used in cause areas such as global health and longevity research, though these do not seem as neglected.

7. Prizes for important technologies and research

Set challenges and pay prizes for EA-relevant inventions, similar to the Ansari X Prize. Compared to AMCs, these instruments are probably usable for very early-stage work while requiring less funding.

8. Implement a small pilot of the Nucleic Acid Observatory (NAO): 

Getting a clearer picture of currently circulating pathogens is one of the most valuable interventions to enable the fast detection and containment of emerging outbreaks, thereby also deterring the intentional use of engineered pathogens.
A significant project enabling this is a proposed Nucleic Acid Observatory. Such an observatory would monitor circulating nucleic acids in the sewage of major cities or in wastewater of central, highly-frequented facilities (e.g. airports, hospitals, train stations). 
The pilot NAO outlined in the paper would cover all US ports of entry, costing $700m a year plus $700m for the setup. Keeping with the megaprojects figure of $100m, one could launch a smaller pilot covering 1 to 3 coastal states in the US. 

9. Purchase high impact academic journals (suggested by Will MacAskill at EAG2021)

Existing incentives within science are not fully aligned with truth-seeking. Traditional research focuses on novelty and significance, while replication studies or negative results aren’t valued. Having an EA-aligned top-tier journal might address some of those problems. However, existing scientific norms are entrenched and hard to change. We would guess that this idea is among the less effective proposals in this post. 

10. Buy promising ML labs

In 2014 DeepMind was acquired by Google for $500 million. DeepMind has considerable leeway, but Google still influences important decisions. Thus, the impact of Google’s acquisition could be very high if DeepMind continues to be a leading organization on the path towards transformative AI. There might be other companies today that would be interested in signing a similar agreement, except that they would instead agree to steer their work towards the design of safe and value-aligned AI. They might start working with existing AI safety organizations and would be more likely to help with the difficult coordination problems that could come up in the coming decades, e.g. AI races.

Governance:

1. A really large think tank: 

There are a couple of EA-aligned think tanks and NGOs in the policy space, but they are all relatively small. For comparison, the Brookings Institution spends over $90 million per year and the RAND Corporation spends around $340 million. An EA think tank with large research teams could focus on important policy questions and monitor major countries’ legislative processes. Housing a think tank in the US seems most promising, but one could also create new think tanks in the EU or Asia. 
This could also entail developing a deep network of lobbyists working on EA causes. There is reason to believe that well-resourced lobbying efforts can have a significant influence if they are well-targeted. But, the biggest bottleneck for such an organization might not be money but the lack of trusted, senior individuals. 

2. Invest in local/national policy: 

There are a couple of structural, political changes that might be worth pursuing from an EA perspective. For example, a ministry of the future, more funding for long-termist science projects, advancing international denuclearisation, proposing very progressive animal welfare laws, etc.  
EAs could invest in cities, political parties, or candidates whose success would be better for the world (e.g. who want to build the institutions, implement the laws, and start the projects mentioned above). Other topics include improved immigration and housing policies and experimentation with governance mechanisms: basically, everything that makes some EAs excited about charter cities, except that these policies would be implemented in existing cities. Again, we are not as excited about this proposal given its low tractability and the potential politicization of EA.

Miscellaneous:

1. Buy a coal mine (suggested by Will MacAskill at EAG2021)

Alex Tabarrok of Marginal Revolution stated that one might lease a coal mine for as little as $7.8 million. However, this number is misleading, as the contract includes an extraction target, making the actual cost much higher. 
Nevertheless, when buying or leasing a mine, we would aim not to use it. There are two reasons to do this. First, it would contribute to climate change mitigation. But, more importantly, having a backup coal mine would reduce existential risk. If humanity falls victim to a global catastrophe that wipes out civilization (most humans, technology, most trade, etc.) but does not kill everyone, we need easy ways to restart. Reserving some coal in easily accessible locations could make a relaunch of civilization easier, as that coal could be used to create energy and heat to bootstrap our tech tree. 

2. Fund an enormous EA forecasting tournament to find talented forecasters all around the world

A forecasting tournament on Metaculus with a relatively modest total prize pool of $50,000 was reported on by Forbes last year. Such tournaments could easily be scaled to a global level. Without having talked to people involved in the forecasting community, we believe such a tournament could lead to

a) identifying talented people from all over the world who can think about complex and EA-related issues (analogous to chess/math/gaming competitions),
b) bringing EA-related issues to broad public attention, and
c) publicizing the idea and value of probabilistic forecasting.
It seems unclear whether such tournaments can be scaled without losing valuable features such as incentivizing honest reporting of uncertainty and sharing useful information, as discussed here. It also seems unclear whether it would be better to instead fund prediction markets and push them towards more EA-relevant questions. 
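As an aside on "incentivizing honest reporting of uncertainty": this property comes from proper scoring rules. The Brier score is not mentioned in the post and is used here only as an illustrative assumption, but under any proper rule a forecaster minimizes their expected penalty by reporting their true belief. A minimal numerical sketch:

```python
# Expected Brier penalty when a forecaster reports probability p
# while believing the event's true probability is q:
#   E[(p - outcome)^2] = q * (1 - p)**2 + (1 - q) * p**2
def expected_brier(p: float, q: float) -> float:
    return q * (1 - p) ** 2 + (1 - q) * p ** 2

q = 0.7  # the forecaster's true belief (arbitrary example value)
grid = [i / 100 for i in range(101)]  # candidate reports 0.00 .. 1.00
best_report = min(grid, key=lambda p: expected_brier(p, q))
print(best_report)  # → 0.7: the honest report minimizes the expected penalty
```

Tournament designs that rank purely by relative performance can break this property, which is one reason scaling is non-trivial.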

3. Stockpiling personal protective equipment (PPE): 

A rolling stockpile of PPE is very desirable for emerging pandemics, yet most countries were not prepared for the COVID-19 induced surge in demand, resulting in shortages of basic PPE like masks or gloves.

While some countries like the US or UK set up or expanded their stockpiles, it might still be valuable to create an international stockpile for subsets of the population that need to be mobile during any emerging pandemic and do not have access to the existing stockpiles, e.g. in the developing world. Most PPE is cheap, non-perishable, easy to store, and almost entirely pathogen-agnostic. But we think this idea is among the less effective ones here, due to existing stockpiles and the heavy demands of logistics and upkeep.


I'm very interested in some of these ideas. If anyone would like to build a pilot version of one of these projects, EA Funds is interested in receiving your application and might fund good pilots with an initial $30k–$500k. Apply here.

(You can also email me at jonas@effectivealtruismfunds.org, though I prefer getting applications over getting emails.)

Just to let everyone know that there's a group of us (former partners at a multi-billion dollar PE fund, global edtech founders/operators, EA community builders, etc.) who've been actively working on setting up a large global scholarship EA foundation and an 'EA University' over this past year.

We've been coordinating with major EA donors and will be launching relatively soon.

In case anyone would like to be kept in the loop, or if there are ways you'd like to potentially get involved, feel free to dm me or reach out at affectivealtruist@outlook.com :)
 

Sent you an email!

I didn't get a response so far and talked to some other grantmakers who didn't seem to know you, either – so I'm confused what's going on here.

Hahah same here Jonas! Let me know if you know more

Wow this sounds super interesting.

Also sent you an email for possible involvement from Training For Good

I'd also highlight: 

  • Much bigger EA orgs - either new ones or substantially scaled versions of existing ones
    • E.g., make something like Rethink-Priorities-in-2030 but in 2023 instead
  • Large consultancies of various types
  • Large projects related to resilient foods or other things ALLFED advocates
    • I haven't checked their latest ideas or tried to vet them, but I think they have some ideas for pretty ambitious, large projects that could plausibly make a big dent in some global catastrophic or existential risks

Thanks for your further suggestions! We actually briefly looked at the ALLFED website and their publications but had the sense that many ideas were still in the modelling/research phase so we didn't include them. The new 80k episode goes into a bit more detail (e.g. funding conversion kits for paper mills to enable quick repurposing for glucose production), so anybody more familiar with their work is very welcome to share potential megaproject ideas.

ALLFED has nearly completed our prioritization, and given the amount of commercialization that has already been done on resilient foods, we think we are ready to partner with other companies to do piloting of the most promising solutions in a way that is valuable for global catastrophes (e.g. very fast construction). Repurposing a paper mill for sugar (and protein if the feedstock is agricultural residues) is a good large project. But there is also fast construction of pilot scale of natural gas (methane) single cell protein and the fast construction of pilot scale hydrogen single cell protein (splitting water or gasifying a solid fuel such as biomass). Furthermore, there is the backup global radio communication system that would be extremely useful for loss of electricity scenarios. 

I think there still is quite a bit of research to be done, especially analyzing cooperation scenarios and the potential of resilient food production by country. This could help inform country-level response plans. This could be facilitated by setting up a research institute on resilient foods. Another possibility is running an X prize for radical new solutions.

Just noting that in the comments of the original post by Nathan Young that the authors linked to, the top-upvoted suggestion was to offset the gap in nuclear security funding created by the MacArthur Foundation's exit from the field. I recently had an opportunity to speak to someone who was there at the time of MacArthur's decision and can share more about that privately, but suffice to say that our community should not treat the foundation's shift in priorities as a strong signal about the importance or viability of work in this space going forward.

I wouldn't treat the upvotes there as much evidence; I think most EAs voting on these things don't have very good qualitative or quantitative models of xrisks and what it'd take to stop them. 

A reductio ad absurdum you might raise here is whether this is an indictment of the karma system in general. I don't think it is. To pick a sample of other posts on the frontpage: posts about burnout and productivity can simply invoke people's internal sense/vibe of what makes them worried, so just using affect isn't terrible; posts about internship/job opportunities can be voted on based on which jobs EAs are internally excited for themselves or their acquaintances/coworkers to work at; posts about detailed specific topics have enough detail in them that people can try to evaluate them on their own merits; etc. 

Thanks for posting this - I like these ideas. Whose job is it to actually make these things happen - who has to take these ideas to donors? Is that responsibility entirely with high net worth advisors? Should there be a megaprojects team within CEA given how much money is on the table?

This perspective strikes me as extremely low-agentiness.

Donors aren't this wildly unreachable class of people, they read EA forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It's nobody's job right now, but it could be yours.

haha that's fair!  there is of course a tragedy of the commons risk here though - of people discussing these ideas and it not being anyone's job to make them happen

Compared to the other ideas here, I think the benefits of an explicitly EA university seem small (compared to the current set-up of EA institutes at normal universities, EAs doing EA-relevant degrees at normal universities and EA university societies).

Are there other major benefits I’m missing other than more value-alignment + more co-operation between EAs?

One downside of EA universities I can think of is that it might slow movement growth since EAs will be spending less time with people unfamiliar with the movement / fewer people at normal universities will come across EA.

I think this is one of these things that are a bit hard to judge unless you have contextual knowledge of, e.g. how things work out at EA-dominated research university institutes. I think more abstract considerations will only take you so far.

The same point also pertains to the other comments in this thread.

I would be surprised if it were worthwhile building an entire university with all the normal departments, but I could see value if it offered specialist masters degrees that you can't obtain elsewhere such as a Masters of AI Safety.

Even then it would seem preferable to me to fund something like a “department of AI safety” at an existing university, since the department (staff and graduates) could benefit from the university’s prestige. I assume this is possible since FHI and GPI exist.

Some quick thoughts: 

  • Word on the grapevine is that many universities have really poor operations capacity, including R1 research universities in the US and equivalent ones in Europe. It's unclear to me if an EA university can do better (eg by paying for more ops staff, by thinking harder about incentives), but it's at least not implausible.
    • Rethink Priorities, Open Phil, and MIRI all naively appear to have better ops than my personal impression of what ops at EA-affiliated departments in research universities look like.
  • Promotion tracks in most (but not all) elite American universities are based on either a) (this is typical) paper publication record or b) (especially in liberal arts colleges) teaching. This can be bad if we (e.g.) want our researchers to study topics that may be unusually sensitive. So we might want to have more options like a more typical "research with management" track (like in thinktanks or non-academic EA research orgs), or prize funding like Thiel/All Souls (though maybe less extreme).
  • Having EAs work together seems potentially really good for wasting less time of both researchers and students.
  • Universities often just do a lot of things that I personally perceive as pretty immoral and dumb (eg in student admissions, possibly discriminate a lot against Asian descent or non-Americans, have punitive mental health services). Maybe this is just youthful optimism, but I would hope that an EA university can do better on those fronts.

I have argued for a more "mutiny" (edit: maybe "exit" is a better word for it) style theory of change in higher education so I really like the idea of an EA university where learning would be more guided by a genuine sense of purpose, curiosity and ambition to improve the world rather than a zero-sum competition for prestige and a need to check boxes in order to get a piece of paper. Though I realize that many EAs probably don't share my antipathy towards the current higher education system.

One downside of EA universities I can think of is that it might slow movement growth since EAs will be spending less time with people unfamiliar with the movement / fewer people at normal universities will come across EA.

Though if it becomes really successful and prestigious, it could also raise the profile of EA.

Though I realize that many EAs probably don't share my antipathy towards the current higher education system.

Anecdotally, most EAs I have spoken to about this topic have tended to agree 

I am a professor and have steadily been exiting higher education. It is bad.

Out of curiosity, would you be interested in sharing your biggest "causes for concern" with higher education? 

In my experience, EAs tend to be pretty dissatisfied with the higher education system, but I interpreted the muted/mixed response to my post on the topic as a sign that my experience might have been biased, or that despite the dissatisfaction, there wasn't any real hunger for change. Or maybe a sense that change was too intractable.

Though I might also have done a poor job at making the case.

My speculative, cynical, maybe unfair take is that most senior EAs are so enmeshed in the higher education system, and sunk so much time succeeding in it, that they're incentivized against doing anything too disruptive that might jeopardize their standing within current institutions. And why change how undergrad education is done if you've already gone through it?

My guess is that it can help convert non-EAs into people who have roughly EA-aligned objectives, which seems highly valuable! What I mean is that a simple econ degree is enough to produce people who think almost like EAs, so I expect an EA university to be able to do that even better.

I'm keen on the idea of funding large RCTs, especially if it's explicitly building the evidence base for policy change, like J-PAL. I think there's definitely room for more organisations and funding in that area.

I can't share more detail right now and they might not work out, but just FYI, I'm currently working on the details of Science #5 and Miscellaneous #2.

Thanks, this seems like a useful collection! 

Here are some other things readers might find useful:

  • List of EA funding opportunities
    • As noted there: "I strongly encourage people to consider applying for one or more of these [funding opportunities]. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)"
  • I recently collected here a bunch of active grantmaking ideas, i.e. ideas for either projects that might be worth trying to make happen or people/teams/orgs that might be worth funding to do something
  • I have some thoughts on or know people/resources relevant to the following ideas from your list, so if someone is seriously considering working on one of those things, feel free to reach out to me via a Forum direct message:
    • "Buy a Big Newspaper"
    • "A Forecasting Organization"
    • "Purchase high impact academic journals"
    • "A really large think tank"
    • "Fund an enormous EA forecasting tournament"

An affluent EA could buy a trusted newspaper and direct it in a more evidence-based direction, for instance, by incorporating forecasts into reporting or by highlighting certain positive developments that get neglected.

Have any EA organizations tried to partner with FiveThirtyEight?

Some more ideas that are related to what you mentioned :

  • Exploring/exploiting interventions on growth in developing countries. For instance, what if we took an entire country and spent about $100 or more per household (for a small country, that could be feasible)? We could do direct transfers as GiveDirectly does, but I'd expect some public goods funding to be worth trying as well.
  • Making AI safety prestigious by setting up an institute that would hire top researchers for safety-aligned research. I'm not 100% sure, but I feel like top AI people often go to Google in big part because they offer great working conditions. If an institute offered these working conditions and could hire top junior researchers quite massively to work on prosaic AGI alignment, that could help make AI safety more prestigious. Maybe such an institute could run seminars, give out awards in safety, or even, in the long run, host a conference.

I feel like a number of these maybe could be fitted under a single very large organization. Namely:

  • Max-Planck Society (MPG) for EA research
  • EA university
  • Forecasting Organization
  • EA forecasting tournament
  • ML labs
  • Large think tank

Basically, a big EA research university with forecasting, policy research, and ML/AI safety departments.

I'd also add a non-profit and for-profit startup incubator. I think universities would be much better if they made it possible to try something entrepreneurial without having to fully drop out.

FYI this sentence misuses the term "existential catastrophe":

If humanity is victim to an existential catastrophe that wipes out civilization (most humans, technology, most trade, etc.) but does not kill everyone, we need easy ways to restart.

If it's possible to restart, then it's not an existential catastrophe. "Global catastrophe" would be a more appropriate term here.

Good catch. We agree and updated it to global catastrophe. 

I basically agree and am glad you highlighted this. 

Nit-pick: But it could be an existential catastrophe even if it's possible to "restart" civilization, if we're locked into a much worse trajectory than was previously attainable. E.g. if we'll recover in terms of population, tech, and GDP, but never expand beyond the Earth, never end factory farming, never have huge numbers of digital minds living excellent lives, or whatever.

See also Venn diagrams of existential, global, and suffering catastrophes.

Thanks for the great post. I appreciate the idea of an EA university or a network of institutes like the Max-Planck Society. They both aim towards the idea of rigorous EA research. I would like to drop another similar idea (I don't know whether it was already discussed and dismissed) which would allow everyone in academia to participate in EA-related research:

 Fund an Open Access EA journal [specifically focused on EA causes and cause prioritisation] without major hurdles for scientists to publish, with a modern peer-review process, e.g. paying the referees a reasonable fee, allowing for comments after publishing, etc. (I am not an expert in establishing journals or in how a peer-review process should be optimised, so take this idea with a grain of salt). This would allow academics around the world to participate in high-quality research on classic EA topics such as longtermism, AI safety, biosecurity, global health and development, animal welfare, and cause prioritisation. It should be a serious journal with a different focus but a high quality level, so that graduates may use the published papers as milestones in their careers. 

 Maybe such an official journal could be (or at least be perceived as?) more rigorous compared to a forum with a comment section?

One of our suggestions was to buy an existing journal, as that might be easier than creating a new one. However, we think there are a lot of reasons why either option might fail, since most problems in academia likely lie at a deeper level than journals. I guess other interventions are just much more effective. But I could be persuaded if someone presents good ideas addressing our concerns. 

This form of the post has got more votes than the original (which was a question post). Why do people think that is?

- do people prefer posts to questions?
- is this better explained/worked through?

The difference is not that big (154/125=1.232), so it could be unrelated to the quality or format, and instead have to do with timing or other things.

One big difference is the inclusion of several examples in the post itself here, and credit for that should go to the authors, whereas users may give most of the credit for the examples in your question to the corresponding answers, not the question itself. If someone wanted to upvote specific examples in this post, they'd upvote this entire post, whereas for the question, they could upvote specific answers instead (or too). If you include karma from the question itself and the answers, there's far more in your question, although probably substantial double counting from users upvoting the post and answers.

I wouldn't read too much into it due to randomness, timing, etc.

But my hunch is that posts are preferred because they provide slightly more value. Rather than having to think of answers yourself or sort the current answers, you can just skim the headlines. 

Fascinating ideas!

#7 should say 'prizes' instead of 'prices'.

On number 9, I have a somewhat different approach I am discussing and trying to take steps to make happen, organized in the gitbook here ... feedback please!

Not a 'mega-project' yet, but it could be.

Hi everyone, I guess my comment relates to “public policy” mega-project idea. Sorry if it’s not the right place to ask for this, but I’d like help requesting funding to pass a bill in California legislature AB 2764 that would ban new animal factory farms and slaughterhouses. We need to pass this bill because all animals deserve compassion, animal agriculture accelerates climate change, and animal agriculture poses risks to public health of humans. If you’d like to help in any way to pass this bill, please reach out to me at 650-863-1550 by text or donate to Compassionate Bay or Direct Action Everywhere non-profits. Thank you! -Rasa

Megaprojects are generally poorly executed, over budget, and over time, so am I correct in taking the "megaproject" designation as really an anti-endorsement of all these activities?

Strong-downvoted for unhelpful sarcasm.

Do you specifically object to the term megaproject, or rather to the idea of launching larger organizations and projects that could potentially absorb a lot of money?

If it's the latter, the case for megaprojects is that they are bigger bets with which funders could have an impact using larger sums of money, i.e., ~1-2 orders of magnitude bigger than current large longtermist grants. It is generally understood that EA has a funding overhang, which is even more true if you buy into longtermism, given that there are few obvious investment opportunities in longtermism.

I agree that many large-scale projects often have cost and time overruns (I enjoyed this EconTalk episode with Bent Flyvbjerg on the reasons for this). But, if we believe that a non-negligible number of megaprojects do work out, it seems to be an area we should explore.

Maybe it'd be a good idea to collect a list of past megaprojects that worked out well, without massive cost overruns. Reflecting on this briefly, I think of the Manhattan Project, prestigious universities (Oxbridge, LMU, Harvard), and public transport projects like the TGV.