
Ben Todd (CEO of 80,000 Hours) says "Effective altruism needs more 'megaprojects'. Most projects in the community are designed to use up to ~$10m per year effectively, but the increasing funding overhang means we need more projects that could deploy ~$100m per year."
https://twitter.com/ben_j_todd/status/1423318852801290248

What are some $100m projects that you think might be worth consideration?

By megaproject, I'm referring to any project that could eventually be scaled up to $100 million per year, not one that is planned from the start to cost $100 million. In many cases, this could include very small efforts that would have to achieve multiple levels of success to eventually absorb $100 million+ per year.

56 Answers

Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.

"Since 2015 alone, MacArthur directed 231 grants totaling >$100m, in some cases providing more than half the annual funding for individual institutions or programs."
"MacArthur was providing something like 40 to 55 percent of all the non-government funding worldwide on nuclear policy"
https://t.co/srsq45ejc7?amp=1 

Out of all the ideas, this seems the most shovel-ready. 

MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.

All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly. 

Possibly not all of that money would meet EA standards of cost-effectiveness though - indeed MacArthur's withdrawal provides some evidence that it isn't cost effective (if we trust their judgement).

Here's the interesting, frustrating evaluation report:  https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf[16].pdf
Looks to me like a classic hits-based giving bet - you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama's JCPOA/New START) get a home run.

To clarify, this is $100m over around 5 years, or $20m/year - which is a good start, but far less than $100m/year.

I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.

I misread this as "nuclear power", not "nuclear arms control"  😂

This is a weird one that is illustrative:

Taking over the US private prison system (described here).

Why:

  • Benefits from returns to scale, maybe only available as a "mega project"
  • Could literally make a profit (cost-effectiveness is infinite; pretty much the only way to beat GiveWell?)
  • It gives access to institutions, even political capital for reform aligned to social change cause areas
  • Almost no one else would do this
  • Probably a lot of bad things going on inside them that EAs could improve

There's a ton of drawbacks. These include barriers to entry like regulations and capture which could make this impractical. Once inside, implementation issues such as cultural/institutional challenges will be far outside the typical circle of competence of EA. 

But I think that's the point—this idea has a flavor orthogonal to "New R&D/policy institute for X".

Certainly innovative, although I wonder about the PR consequences.

I'm late to this, but I wonder if Charles' analysis ought to extend beyond private prisons to address all the ways in which prisons and jails have come to privatize essential services. This includes telephone calls and digital communication, which are largely controlled by a legal monopoly, along with medical treatment and food preparation.

The hyperlinked stories and legal cases are but a few examples of the potentially life-altering negative outcomes that have come out of privatization. One of the major challenges with combating this trend is that do... (read more)

Charles He
This is a great and deep comment. I think it’s extremely generous to call my little blurb above an “analysis”. I am not informed and I am not involved in this area of prison or justice reform.  I’m writing this because I don’t want anyone to “wait” on me or anyone else. If you are reading this and want to dedicate some time on this cause or intervention, you should absolutely do so! Again, thanks for this comment.

I love this. Could be big or small nearly anywhere in the world. Some precedent too: Prison reform charity Nacro joins bid to run jails | Prisons and probation | The Guardian

tomstocker
I think why I like this so much is that it isn't another idea that is fiddling on the margins of a problem with a complicated theory of impact - it just provides a project vehicle to solve one of the more tractable key problems head on.

Love this! We could also use prisons as a place where social scientists could study how to optimize ethical development amongst criminals. These samples are so hard to access, but could produce so much impactful insight on when and why ethical decision-making fails, and how to improve ethical decision-making under conflict. This could also be coupled with a grant competition that would fund the best ideas on how to rehabilitate inmates and improve their ethical decision-making both while in prison and after being reintegrated back into society.

Build up an institution that does the IGM economic experts survey with every scientific field, with paid editors, additionally probabilistic forecasts, monetary incentives for the experts maybe. https://www.igmchicago.org/igm-economic-experts-panel/

[anonymous]

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up for $55m over 5 years ($11m/year).

I think you’re right. Even if the experts were paid really well for their participation, say 10k per year (maybe as a fixed sum or in expectation given some incentive scheme), and you might have on the order of 50 experts each for 20(?) fields, then you end up with 10 million per year. But probably it wouldn’t even require that, as long as it’s prestigious and is set up well with enough buy-in. Paying for their judgement would make the latter easier I suppose.
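The cost arithmetic above can be sketched quickly (all figures are the commenter's assumptions, not real budget numbers):

```python
# Rough annual cost of paying survey experts, per the estimate above.
pay_per_expert = 10_000      # $/year, fixed sum or in expectation (assumed)
experts_per_field = 50       # assumed panel size per field
num_fields = 20              # assumed number of scientific fields

annual_cost = pay_per_expert * experts_per_field * num_fields
print(f"${annual_cost:,} per year")  # $10,000,000 per year
```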

I would upvote if someone wrote a quick summary of this and a number of the other ideas which aren't immediately clear on first reading.

[anonymous]
I think the gist of this idea might be something like a massively-scaled up prediction platform that focuses on recruiting subject-matter experts and pays them to make predictions on questions relevant to their expertise while perhaps additionally discussing important/neglected trends in their fields. 

The Center for Election Science could easily make efficient use of greater than $50M a year with infrastructure and ballot initiatives. We've already laid out a plan on how we would spend it. We could also potentially build towards some hyper-aggressive $100M years by including lobbying in the remaining states that don't allow ballot initiatives. In any case, we are woefully underfunded relative to our goals and could at the very least surpass the $50M threshold in a couple of years with sufficient funding. If even greater funding were available, we could build in lobbying following more state-level wins.

For clarity, our lack of funding has already cost us approval voting campaign opportunities and is a big issue for us.

Okay, but I'm not persuaded that the Center for Election Science is scientific. I think it should be called "The Center for Approval Voting (especially the single-winner district kind)™"

I studied electoral systems for a school project and reached very different conclusions, for instance: that all single-winner-district systems are inherently  non-proportional and subject to gerrymandering. I went so far as to design my own system (I suppose its merits are debatable — but never debated). In emails from the CES I see none of the insights I gained in my ... (read more)

Finally get acceptable information security by throwing money at the problem.

Spend $100M/year to hire, say, 10 world-class security experts and get them everything they need to build the right infrastructure for us, and for e.g. Anthropic.

Strong second - we should build up secure open computing from bare metal (secure, open verifiable CPUS, memory, etc.) to the OS, to compilers, to a secure applications layer.

kokotajlod
Is this something we could purchase for a few hundred million in a few years?
Davidmanheim
I discussed this with a couple of people ca. 2 years ago, and thought it was likely that a company like Google could design and produce a full-stack secure system as a moderately large internal project. And some groups are already doing parts of this - for example, a provably secure OS microkernel, for far less than what we'd be able to spend. As a Fermi estimate on the high end: if we hire 10 top hardware design people for $500k/year each, throw in the same number of OS design people and compiler designers at the same cost, plus a team of 50 great people to do the rest of the development and testing at $300k/year, then $100m means we have 3 years to do this - and it's an open-source project, so we'd get universities, etc. working on it as well. (I.e. we could not mass-produce the hardware at these prices, but that's commercialization, not design, and it should be funded by sales.)
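The Fermi estimate above works out as follows (headcounts and salaries are the commenter's assumptions):

```python
# Annual burn rate for the proposed full-stack security team.
designers = 10 * 3             # hardware + OS + compiler designers, 10 each
designer_salary = 500_000      # $/year (assumed)
dev_team = 50                  # development and testing staff
dev_salary = 300_000           # $/year (assumed)

annual_burn = designers * designer_salary + dev_team * dev_salary
runway_years = 100_000_000 / annual_burn
print(f"${annual_burn:,}/year -> {runway_years:.1f} years of runway")
```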

(not an expert) My impression is that a perfectly secure OS doesn't buy you much if you use insecure applications on an insecure network etc.

Also, if you think about classified work, the productivity tradeoff is massive: you can't use your personal computer while working on the project, you can't use any of your favorite software while working on the project, you can't use an internet-connected computer while working on the project, you can't have your cell phone in your pocket while talking about the project, you can't talk to people about the project over normal phone lines and emails... And then of course viruses get into air-gapped classified networks within hours anyway. :-P

Not that we can't or shouldn't buy better security, I'm just slightly skeptical of specifically focusing on building a new low-level foundation rather than doing all the normal stuff really well, like network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. etc. Well, I guess you'll say, "we should do both". Sure. I guess I just assume that the other things would rapidly become the weakest link.

In terms of low-level security, my old company has a big line of business designing chips themselves to be more secure; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI, that's just one thing I happen to be familiar with. Actually I guess it's not that relevant.

Davidmanheim
Agreed that secure low-level without application security doesn't get you there, which is why I said we need a full stack - and even if it wasn't part of this, redeveloping network infrastructure to be done well and securely seems like a very useful investment. But doing all the normal stuff well on top of systems that still have insecure chips, BIOS, and kernel just means that the exploits move to lower levels - even if there are fewer of them, the difference between 90% secure and 100% secure is far more important than moving from 50% to 90%. So we need the full stack.

I see enormous value in it and think it should be considered seriously.

On the other hand, the huge amount of value in it is also a reason I'm skeptical that it's obviously achievable: there are already individual giant firms that would internally save many millions annually (not to mention the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-sys... (read more)

Davidmanheim
I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.
FlorianH
Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort. I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first ones to think of this. In addition to all the other uncertainties, keep in mind that no one seems to have made serious progress in this domain, despite the possibly enormous value even private firms could have captured from such progress.

Epistemic status: Confused person with zero expertise in this area

Who is "us" in this scenario? I assume it's meant to be "organizations with access to infohazardous bio/AI data"?

If so, what makes you think of the current infosec of these orgs as "unacceptable"? If you think they'd disagree with this characterization, do you have a sense for why?

If not, what do you see as some plausible consequences of weak infosec that could plausibly total $100m in damages for EA orgs if they came to pass, given that EA is a network of lots of organizations, with pretty ... (read more)

This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah our security is unacceptably weak", "I don't think we are in danger yet, we probably aren't on anyone's radar", and "Yeah we are taking it very seriously, we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person, so maybe my experience is misleading. Also (b) on priors, it seems that people in general don't take security seriously until there's actually a breach. And (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full time on this.

 

I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seem to strongly agree with the following claim:
"No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."

If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.

Makes sense. Just to clarify — the phrasing here makes me think these are organizations with potentially dangerous technical knowledge, rather than e.g. CEA. Is that right?

kokotajlod
Yes.

https://evervault.com/ are launching in October and generally working on problems in this space

Here's a few suggestions for near-term megaprojects: 

- Longevity research
- Meat-replacement mega-cost reduction investments (leapfrogging current tech) 
- Eliminating disease-bearing mosquitoes 
- Eliminating all vaccine-preventable diseases worldwide 
- Developing cheap, universal metagenomic scanning for biosecurity (Also see this slightly less ambitious version, mentioned by Alex in a different answer.)
- Large-scale governance reform initiatives 
- Universally available, validated, well-built apps for CBT to reduce depression / increase happiness 
- AI safety (We're doing this one already, so the key players may not have room for funding.)

I suggest you split this into different comments so each can be upvoted separately.

For AI safety - maybe Redwood has the most room for funding? They seem to be the most interested in growth (correct me if I'm wrong). And even if the existing players don't have more room, other ways need to be thought of to scale up further through funding as the field is clearly still too small to compete in the race against the titanic field of AI capabilities.

Agree longevity needs to be funded more as well, though lots of aging billionaires like Bezos seem to be throwing tons of money at it these days too so maybe EA money would be much less useful/uniquely needed there than e.g. AI alignment.

Impact certificates. Announce that we will purchase NFTs representing altruistic acts, created by one of the actors. (Starting now, but with a one-year delay, such that we can't purchase an NFT unless it's at least a year old.) Commit to buying $100M/year of these NFTs, occasionally reselling them and using the proceeds to buy even more. Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have. 

Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

It may be critical that the purchasing decisions will somehow account for historical risks (even ones that did not materialize and are no longer relevant), otherwise this approach may fund/incentivize net-negative interventions that are extremely risky (and have some chance of being very beneficial). I elaborated some more on this here.

I don't understand. Can you explain more what this project would do and how it would create change?

Also, this project seems to involve commitment of i) hundreds of millions of dollars of funding and ii) reliable guarantees that these will be used cost effectively.

These (extraordinarily) strong promises are structurally necessary and also seem only achievable through "centralization".

Given this centralization, what is the function or purpose of the NFT?

(Note that my question isn't about technical knowledge about "blockchain" or "NFTs" and you can assume gears knowledge of them and their instantiations up through 2020.)

kokotajlod
Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way.

(Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)

I don't think it should replace our regular grants programs. But it might be a nice complement to them.

I don't see what you mean by centralization here, or how it's a problem. As for reliable guarantees the money will be used cost-effectively, hell no - the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.
Charles He
But the reason why you would evaluate someone's pitch, as opposed to using hindsight, is that nothing would be done without funding?

I think I am using centralization in the same way that cryptocurrency designers/architects talk about how cryptocurrency systems actually work ("centralization pressures"). The point of NFTs, as opposed to you, me, or a giant granter producing certificates, is that they are part of a decentralized system, not under any one entity's control. My understanding is that this is the only logical reason why NFTs have any value and are not a gimmick. They don't have any magical power by themselves, or any special function or information or anything like that.

Under this premise, decentralization is undermined if any other structural component of the system is missing - for example, if the grantors or their decisions come from a central source. Then the value of having a decentralized certificate is unclear. Note that undermining decentralization is sort of like having a wrong step in a math proof: it's existentially bad, as opposed to a mere reduction in quality.

I meant that you have written out two distinct promises here that seem to be necessary for this system to structurally work. One of these promises seems to be high-quality evaluation:

Once it's established that you will be giving $100M a year to buy impact certificates, that will motivate lots of people already doing good to mint impact certificates, and probably also motivate lots of people to do good (so that they can mint the certificate and later get money for it)

By buying the certificate rather than paying the person who did the good, you enable flexibility -- the person who did the good can sell the certificate to speculators and get money immediately rather than waiting for your judgment. Then the speculators can sell it back and forth to each other as new evidence comes in about the impact of the original act, and the conversations the speculators have about your predicted evaluation can then help you actually make the evaluation, thanks to e.g. facts and evidence the speculators uncover. So it saves you effort as well.

Charles He
Ok, I see what you're saying now. I might see this as creating a bounty program for altruistic successes, while at the same time creating a "thick", crowdsourced market for those bounties, hopefully with virtuous effects.
kokotajlod
That's a succinct way of putting it, nice!

Hire ~5 film-studios to each make a movie that concretely shows an AI risk scenario which at least roughly survives the rationalist fiction sniff test. Goal: Improve AI Safety discourse, motivate more smart people to work on this.

Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/

(Sentinel is a system for testing new diseases such that unknown pathogens could be recognised from the first sample. Listen to the podcast alexrjl has linked) 

What about creating academic institutes in reputable universities to tackle important problems, eg similar to FHI or CSER, creating research prizes, and sponsoring conferences. I'm mostly thinking about AI Safety, but it may be useful in other areas too.

Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
On bio preparedness there's quite a lot, e.g. Cassidy Nelson recommendations, Andy Weber recommendations

Something that could increase economic growth, dramatically reduce inequality of opportunity, and improve well-being of people worldwide:

Try to get as many people connected to the internet with a personal device as possible. 

The stat that ~50% of the world is connected to the internet is misleading. To count as connected, you only need to have used a networked device once in the past three months, which is a far lower bar than most people would expect. 
 

Source: International Telecommunication Union ( ITU ) World Telecommunication/ICT Indicators Database


The importance of internet connectivity is hard to overstate. It's necessary to function as 21st-century citizens and is the backbone of our societies. It's also necessary for securing various human rights. 

Some quick reasons why internet access is important: 

  • Grants access to free education on just about anything 
  • Access to banking, communication technologies, etc.
  • Increases economic growth, of which well-being is somewhat a function: internet access effectively increases the computational power of the economic system and can 'improve' the substrate upon which it runs (people).
  • Increase awareness of EA in general  

Wrote this quickly so apologies for the brevity. I've been working on a longer post where I dive into this in a lot more detail. 

My very uninformed sense is that Starlink might make internet access a lot easier. Metaculus question writing opportunity.

alexlyzhov
Some people have been saying that the Starlink system's limit is 0.5M consumers even after they release a whole lot more satellites: https://www.techdirt.com/articles/20200928/09175145397/report-notes-musks-starlink-wont-have-capacity-to-truly-disrupt-us-telecom.shtml. This would mean you can't expect it to turn even 0.1% of unconnected people into netizens.
samhbarton
Yeap, it's incredibly exciting. I see a few issues with it in this context, though. In the short run, it will be prohibitively expensive for most of the world's population, and it doesn't solve the device-ownership problem. I also don't like the idea of internet access being in the control of a company that is subject to national laws. I feel that we need a censorship-resistant internet, especially in the current climate. We're increasingly seeing crackdowns across the world, and I don't think the US will be immune from increased internet suppression. 

I think this would be broadly useful and in particular increase the reach of mobile payment-based activities like GiveDirectly. I'd be curious about estimates of how cost-effective increasing internet penetration would be, compared to throwing more money at GD.

Mozilla have a fellowship aimed at this: https://foundation.mozilla.org/en/what-we-fund/fellowships/fellows-for-open-internet-engineering/

Developing new climate models has costs in the hundreds of millions of dollars. Useful longtermist climate modelling could include:

[anonymous]

I don't see climate research as very valuable. The value of information would only be high if this research would change how people act. Climate inaction seems to be mainly political inertia, not lack of information about potential catastrophe. 

HaydnBelfield
Do you mean just the fourth bullet, or do you think this about all four?

The 1980s nuclear winter and asteroid papers (I'm thinking especially Sagan et al and Alvarez et al) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged as much on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating.

There's also VOI in resolving how big a concern nuclear winter is (e.g. some recent papers are skeptical) - if it turned out not to be as existential as we thought, that would change cause prioritisation for GCRs. On geoengineering (sorry, 'climate interventions'(!)), note that 'getting more climate modelling' is a key aim for e.g. Silver Lining.

On the fourth one, on the margin, I think more research - especially if it were the basis for an IPCC special report - would be influential. There's also VOI for our cause prioritisation. It just is really remarkable how understudied it is!
https://www.pnas.org/content/114/39/10315
https://forum.effectivealtruism.org/posts/HaXxEtx4QdykBjJi7/betting-on-the-best-case-higher-end-warming-is
[anonymous]
I was just referring to the last bullet re climate change. E.g. in the last IPCC report, it would have been reasonable for govts to believe that there was a >10% chance of >6C of warming - and that has been true since the 1970s - without having any impact. The political response to climate change seems to be influenced by mainstream media coverage and public opinion in some circles, which it would be fair to characterise as 'very concerned' about climate change. An opinion poll suggests that 54% of British people think that climate change threatens human extinction (depending on question framing). I agree that in a rational world we want to know how bad climate change could be, but the world isn't rational.

If you're just talking about EA cause prioritisation, the cost-benefit ratio looks pretty poor to me. Wrt reducing uncertainty about climate sensitivity, you're talking costs of $100m per year for a slim chance of pushing climate change up above AI, bio, and great power war for major EA funders. Or we might find out that climate change is less pressing than we thought, in which case this wouldn't make any difference to the current priorities of EA funders.

I also don't see how research on solar geoengineering could be a top pick - stratospheric aerosol injection just doesn't seem like it will get used for decades because it requires unrealistic levels of international coordination. Also, I don't think extra modelling studies on solar geo would shed much light unless we spent hundreds of millions. Climate models are very inaccurate and wouldn't provide much insight into the impacts of solar geo in the real world. There might be a case for regional solar geo research, though.

(Fwiw, I really don't rate that Xu and Ramanathan paper. They're not using existential in the sense we are concerned about. They define it as "posing an existential threat to the majority of the population". The evidence they use to support their conclusions is very weak. For example, they not
HaydnBelfield
Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world', which seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.

I was thinking that $100m would be for all four of these topics, and that we'd get cause-prioritisation VOI across all four areas. $100m for impact and VOI across all four seems pretty good to me (however, I'm a researcher, not a funder!).

On solar geo, I'm not an expert on it and am not arguing for it myself, merely reporting that it's top of the 'asks' list for orgs like Silver Lining. I actually rather like the framing in Xu & Ramanathan - I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited it more to demonstrate the lack of research that's been done on these scenarios.
[anonymous]
On the last point, during the early Pliocene, early hominids  with much worse technology than us lived in a world in which temperatures were 4.5C warmer than pre-industrial. It would be a surprise to me if this level of warming would kill off everyone, including people in temperate regions.  There's more to come from me on this topic, but I will leave it at that for now

I definitely want to see more modeling of supervolcano and comet disasters.

Take some EAs involved in public outreach, some journalists who made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity- and altruism-waterline.

We could buy (a significant number of shares in) media companies themselves and shift their direction. Bezos bought the Washington Post for $250 million. Some are probably too big, like the New York Times at a $8 billion market cap and Fox Corporation at $20 billion.

I generally agree, although I think these >$1B general-audience entities are too expensive for EAs. Whereas I think it would make sense to buy media companies and consultancies that are somewhat focused on global security, AI and/or econ research, e.g. Foreign Policy magazine, Wired, GZero Media, Stratfor, the Economist Intelligence Unit, and so on. At least, I think the value of information from trying out buying one or more smaller entities, to see how one could steer them or bolster them with some EA talent, could be high; the most similar things I can think of EAs having done previously are investing in DeepMind and OpenAI.

Another way of thinking about this question is - are there other entities that are of less value to invest in than DM/OAI, but more than the media/consulting orgs that I mentioned?

5
Davidmanheim
Who should buy them? I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)
5
RyanCarey
It could only be billionaires who are running out of donation targets. If Bezos can buy WaPo, then less prominent billionaires can buy less popular media with much less (though not zero) controversy. But I agree that it only works well if you have EA-leaning talent to work there, especially at the executive level.

Matt makes lots of money on his independent substack now, so that feels less urgent, but funding other things like future perfect in other news sources as the Rockefeller Foundation does now seems great.

2
MaxRa
Urgent doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative and world-improvy journalism?
2
HStencil
I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.
2
MaxRa
Thanks, didn't see what he said about this. Just read an Atlantic article about this and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, and why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism. https://www.theatlantic.com/ideas/archive/2020/11/substack-and-medias-groupthink-problem/617102/ Also, the idea of course is not at all dependent on him, I suppose there would be other great candidates, Yglesias just came to mind because I really like his work. 
2
HStencil
Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.
2
MaxRa
Interesting, the Atlantic article didn't give this impression. I'd also be pretty surprised if you had to become essentially the cliche of a moderate politician to be part of the leadership team of a journalistic organization. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.:

  • epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
  • values like exploring neglected and important topics with a focus on having an altruistic impact

And then maybe being involved in hiring the people who have shown promise and fit?
1
HStencil
Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causes are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.
1
Nathan Young
Or indeed experimenting with different incentives in news production. What would EAs do if they all had £10 to spend on news production?

Wouldn't they lose readers if they left their organizations? Is that what you mean? The fact that Future Perfect is at Vox gets Vox readers to read it.

2
MaxRa
In the short term yes, but my vision was to see a news media organization under the leadership of a person like Kelsey Piper that is able to hire talented reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. Not sure how scalable Future Perfect is under the Vox umbrella, and how freely it could scale up to its best possible form from an EA perspective.

I have claimed that the first few hundred million dollars of preparation for agricultural and electricity disrupting GCRs is competitive with AGI safety for the longterm, and preparation for agricultural GCRs is more cost effective than GiveWell interventions. Since these catastrophes could happen right away, I think it does make sense to scale up quickly to $100 million per year to get the preparation fast. Beyond research, this money could be used for piloting new technologies and developing response plans and training. To maintain $100 million per year may then be lower cost effectiveness than AGI safety at the expected margin, but would still provide additional value and may be competitive with other priorities. Projects could include subsidizing resilient food sources such as seaweed, cellulosic sugar, methane single cell protein, etc. Or building factories flexibly such that they could switch quickly from producing animal feed or energy to human food. These could easily be many billions of dollars per year.

No idea what it would cost, but we should get to work on cloning John von Neumann: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/

Interesting! Do you know anything about the state of regulations around this? 

(sorta related, there are several pet cloning services)

I'm not sure what the potential downsides of such widespread tech would be, but it seems like something that could be highly scalable if done as a for-profit company.

3
Davidmanheim
Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel, I assume the EU's rules would be similar.)

The Sustainable Development Goals - and their predecessor, the MDGs - are like a megaproject led by the UN. Some of these are already aligned with EA priorities, such as the following:

  • Eradicating extreme poverty (Goal 1, Target 1.1)
  • Ending hunger (Goal 2, Target 2.1) and malnutrition (Target 2.2)
    • Fortify Health aims to improve health by providing fortified wheat flour
  • Good health and well-being (Goal 3)
  • Clean water and sanitation (Goal 6)
  • Ending energy poverty (Goal 7, Target 7.1)
  • Increasing the share of renewable energy (Target 7.2) and energy efficiency (Target 7.3)
  • Promoting clean energy innovation (Target 7.A)
  • Decent work and economic growth (Goal 8)

The Economist has written that Goal 1 (ending poverty) should be "at the head of a very short list." In my opinion, if we're going to do a megaproject, we should take a handful of the SDG targets (such as 1.1, ending extreme poverty) and spend billions of dollars aggressively optimizing them.

An institute for the science of suffering.

Do you know about QRI? They're pretty close to what you're describing. https://www.qualiaresearchinstitute.org/

4
RobertDaoust
Yes I know, thank you ADS, but I rather have in mind something like "Toward an Institute for the Science of Suffering" https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#

You can maybe make very good civilizational refuges for $100M/year, though this is probably considerably more capital than the MVPs I'd like to consider would need.

I did some more thinking (still not full Fermis) and now think that this is a >1B project even for just a sufficiently good MVP, possibly considerably more. 

Though most of the cost is upfront, like digging and constructing full bunkers with individual nuclear power plants. The running cost should be considerably lower than $100M/year, unless I'm missing something important.

Is there a good writeup anywhere on cost estimates for this kind of refuge? Or what it would require?

Not that I know of, Nick Beckstead wrote a moderately negative review of civilizational refuges 7 years ago (note that this was back when longtermist EA had a lot less $s than we currently do). 

One reason I'd like to write out a moderately detailed MVP is that then we can have a clear picture for others to critique concrete details of, suggest clear empirical or conceptual lines for further work, etc, rather than have most of this conversation a) be overly high-level or b) too tied in with/anchored to existing (non-longtermist) versions of what's currently going on in adjacent spaces. 

Funding a "serious" prediction market.

Not sure if 100M is necessary or sufficient if you want many people or even multiple organizations to seriously work full-time on forecasting EA relevant questions. Maybe could also be used to spearhead its usage in politics.

www.ideamarket.io is working on something that's in the same vein. It's not a prediction market, but seeks to use markets to identify credible/trustworthy sources. 

Disclaimer: I started working with Ideamarket a month ago.

I hope it’s ok to mention something I’d like to do at Foresight Institute:

Crowdsource + Crowdfund Civilization Tech Map

Goals

  • Build on this map for Civilizational Self-Realization (scroll to end of article) to create an interactive technology map for positive long-term futures that is crowdsourced (Wikipedia-style) and allows crowdfunding (Kickstarter-style)
  • The map surveys the ecosystem of areas relevant for civilizational long-term flourishing: health, AI, computing, biotech, nanotech, neurotech, energy, space tech, etc.
  • The map branches out into milestones in each area, and either lists projects solving them, or requests projects to solve them, including options to fund either
  • Crowdsourcing of milestones and requests for projects will get it very wrong at first but can get continuously course corrected, e.g. via prediction markets 
  • Crowdfunding makes more and more people have skin in the game for the long-term future, e.g. via tokenization, retroactive public goods funding, or a similar mechanism
  • In sum, the map can serve as a north star to coordinate those seeking to work toward positive futures and those seeking to fund such work.

Challenge prize(s) to incentivise the development of innovative solutions in priority areas. These could be prizes for goals already suggested by people in this thread  (e.g. producing resilient food sources, drastic changes to diagnostic testing, meat alternatives underinvested in by the market) or others. 

Quotes from a Nesta report on challenge prizes (caveat that I haven't spent any time looking up opposing evidence/perspectives):
 

By guiding and incentivising the smartest minds, prizes create more diverse solutions. Because prizes only pay out when a problem has been solved, you can support long shots, radical ideas and unusual suspects while minimising risk...

The high profile of a prize can raise public awareness and shape the future development of markets and technologies. Prizes can help identify best practice, shift regulation and drive policy change...

For the Ansari XPRIZE, 26 teams spent $100 million chasing the $10 million prize, jump starting the commercial space industry.

 

See also Musk's $100m prize for carbon capture tech

Buy up scarce resources which are being used for bad things and just sit on them. Like the thing where you buy rainforest to prevent logging. Coal mines, agricultural land used for animals, GPUs?!

Interesting idea! I think this works much better when supply is constrained, eg land, and not when supply is elastic (eg GPUs). I'm curious whether anyone has actually tried this

Feels like buying GPUs would just increase their production.

2
GMcGowan
That's true. I just listened to the most recent 80k podcast where they joke about buying up GPUs so it was in my head :) 
1
Nathan Young
Haha, fair :)

Scaling up carbon removal and other promising climate-related technologies before governments are willing to fund them. A lot like what Stripe and Shopify have been doing, but about an order of magnitude bigger. If the timing is right (I'm not sure it is) this strategy could get a fair bit of leverage by driving costs down and accelerating even larger-scale deployments.

https://www.drawdown.org/solutions

Launching a Nucleic Acid Observatory, as outlined recently by Kevin Esvelt and others here (link to paper). With $100m one could launch a pilot version covering 5 to 10 states in the US.

Activist investment fund which invests in large companies and then leans on them to change their policies. Examples abound in climate change, but other than that:

  • Food related companies to stop factory farming
  • Biotech companies to stop them from doing gain of function or mirror life research

Bezos bought the Washington Post for $250 million. We could try to buy some other media groups, or at least a significant number of shares in them. Some are probably too big, like the New York Times at $8 billion market cap and Fox Corporation at $20 billion.

I think food-related companies are also probably too big relative to impact, with market caps in the billions or tens of billions of dollars for Tyson, Pilgrim's Pride, JBS SA, McDonald's. You could buy shares in smaller ones, but they also probably have a disproportionately smaller share of farmed animals, although getting a few of them to improve animal welfare policies could make the big ones look bad and push them to follow.

Responding as a member of the ALLFED team.

A network of reliable, long distance shortwave radio systems that do not depend on external sources of electricity and are unable to be disabled by widespread cyber attack, EMP, or most other threats to the global communication infrastructure.

In a wide range of catastrophes, communication systems are a critical vulnerability which, if disrupted, would delay societal recovery from the disaster. A highly resilient and reliable system is HAM shortwave radio, which allows reliable, low-cost communication to a significant fraction of the global population. Maintaining key high-speed communication channels during a catastrophe would greatly increase disaster resilience beyond flyer distribution, potentially at relatively little additional cost. A backup shortwave radio communication system would facilitate timely advice on where to locate clean water sources, identify sensible relocation options, allow improved international cooperation, and allow coordination about the nature and likely duration of the outage.

We’ve identified HAM shortwave radios as key electronic equipment that is both likely to be highly resilient to global communication disruption on large or small scales, and relatively easy to distribute. Another interesting use for these radios is distribution to power grid stations to aid blackstart communications after large-scale electrical grid collapse.

While several network configurations may serve GCR reduction purposes, our preliminary network design involves around a dozen central stations receiving and broadcasting globally, a network of several hundred two-way NVIS transceiver networks operated by trained personnel, and a few thousand distributed receiver-only radios. The network would use SSB communications to lower power requirements. To cover the entire Earth's population, we estimate the total construction and shipping cost at between USD $2 million and $10 million, scaling roughly proportionally with the fraction of the global population reached by the network.

Total costs would therefore reasonably reach into the tens to hundreds of millions for this sort of mega project, depending on the spatial density of the network.
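The "scaling roughly proportionally with coverage" claim above can be sketched as a toy cost model. The station counts mirror the preliminary design described in the answer, but the per-unit costs are purely illustrative assumptions, not ALLFED's actual figures:

```python
# Toy cost model for a backup shortwave radio network.
# Station counts follow the preliminary design above; all unit
# costs (USD) are illustrative assumptions for the sketch only.

def network_cost(coverage_fraction,
                 central_stations=12,    # global broadcast/receive stations
                 nvis_networks=400,      # two-way NVIS transceiver networks
                 receivers=3000,         # distributed receive-only radios
                 central_cost=100_000,   # assumed cost per central station
                 nvis_cost=5_000,        # assumed cost per NVIS network
                 receiver_cost=200):     # assumed cost per receiver
    """Construction + shipping cost, scaling roughly proportionally
    with the fraction of global population covered."""
    full_cost = (central_stations * central_cost
                 + nvis_networks * nvis_cost
                 + receivers * receiver_cost)
    return coverage_fraction * full_cost

full = network_cost(1.0)   # full global coverage: $3.8M
half = network_cost(0.5)   # half the population: $1.9M
```

With these assumed unit costs, full coverage lands at $3.8M, inside the $2-10M range quoted above; the "tens to hundreds of millions" figure would then come from building a spatially denser network (more NVIS networks and receivers per region).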

Announce $100M/year in prizes for AI interpretability/transparency research. Explicitly state that the metric is "How much closer does this research take us towards, one day when we build human-level AGI, being able to read said AI's mind, understand what it is thinking and why, what its goals and desires are, etc., ideally in an automated way that doesn't involve millions of person-hours?"
(Could possibly do it as NFTs, like my other suggestion.)

I don't know the area well, but I guess that one option would be to invest in relevant AI companies, to be able to influence their decision-making (and it could also be profitable). I guess that one could in principle invest very large sums in that. And unlike some other suggested projects, it is maybe not necessarily logistically complicated (though it depends on the set-up). Cf. Ryan's comment.

Proof of concept for a geoengineering scheme (could be controversial)

9 PACs have raised/spent more than $100m (source). So an EA PAC?

Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they're both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.

Create global distributed governments.

Governments as they exist today seem antiquated to me as they are linked to particular geographic regions, and the particular shapes and locations of those regions are becoming increasingly irrelevant.

Meanwhile some governments are good at providing for their people – social security, health insurance, enforcement of contracts, physical protection, etc. – so that’s fine, but there are also a lot of governments that are weak in one or more of these critically important departments.

If there were a market of competing global governments, we’d get labor mobility without anyone actually having to move. The governments that provide the best services for the cheapest prices would attract the most citizens.

These governments could draw on something like Robin Hanson’s proposal for a reform of tort law to incentivize a market for crime prevention, could use proof of stake (where the stake may be a one-time payment that the government holds in escrow or a promise of a universal basic income to law-abiding citizens) for some legal matters, and could use futarchy for legislation.

They could also provide physical services, such as horizontal health interventions and physical protection in countries where they can collaborate with the local governments.

An immediate benefit would be the reduction of poverty and disease, but they could also serve to unlock a lot of intellectual capacity by giving people the spare time to educate themselves on matters other than survival. They could define protocols for resolving conflicts between countries and lock in incentives to ensure that the protocols are adhered to. (I bet smart contracts can help with this.)

That way, they could form a union of autonomous parts sort of like the cantons of Switzerland. Such a union of global distributed governments could eventually become a de-facto world government, which may be beneficial for existential security and for enabling the Long Reflection.

Such a government could be bootstrapped out of the EA community. A nonpublic web of trust could form the foundation of the first citizens. If the system fails even when the citizenry is made up largely of highly altruistic, conscientious people who can pay taxes and share a similar culture, it’s probably not ready for the real world. But if it proves to be valuable, it can be gradually scaled up to a broader population spanning more different cultures.

I’ve come to feel like it’s a red flag if such a project bills itself as a distributed state or something of the sort. There seems to be a risk that people would start such a project only to do something grand-sounding rather than solve all the concrete problems that a state solves.

I’d much rather have a bunch of highly specialized small companies that solve specific problems really well (and also don’t exclude anyone based on their location or citizenship) than one big shiney distributed state that is undeniably state-like but is just as flawed as most ge... (read more)

As I alluded to in a comment to KHorton's related post, I believe SoGive could grow to spend something like this much money.

SoGive's core idea is to provide EA style analysis, but covering a much more comprehensive range of charities than the charities currently assessed by EA charity evaluators.

As mentioned there, benefits of this include:

  • SoGive could have a broader appeal because we would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which have high levels of brand recognition in the US (c. 50%).
  • Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
  • There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact

Full disclosure: I founded SoGive.

This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.

The Human Diagnosis Project (disclaimer: I currently work there). If successful, it will be a major step toward accurate medical diagnosis for all of humanity.

Creating a new academic institute - the EA university - that houses a lot of EA research and (somehow) avoids the many issues seen in traditional academia.