
Rigor: Quickly written, rant-style. Comments are appreciated. This post doesn't get into theories of why this phenomenon exists or the best remedies for it.
Facebook post here

One common pattern in many ambitious projects is that they rest on a core of decisive but small or flimsy pet theories, and typically there's basically no questioning of or research into these theories. This is particularly the case for the moral missions of large projects.

A group could easily have $100 Billion spent on an initiative before $100K is spent on actually red-teaming the key ideas.[1][2]

I had one professor in college who was a literal rocket scientist. He clearly was incredibly intelligent and had spent many years mastering his craft.

I had a conversation with him at his office, and he explained that he kept getting calls from friends in finance offering him 3x his salary. But he declined them because he just believed it was really important for humanity to build space settlements in order to survive long-term.

I tried asking for his opinion on existential threats, and which specific scenarios these space settlements would help with. It was pretty clear he had barely thought about these. From what I could tell, he probably spent less than 10 hours seriously figuring out whether space settlements would actually be more valuable to humanity than other alternatives. But he was spending essentially his entire career, at perhaps a significant sacrifice, trying to make them.

I've since worked with engineers at some other altruistic organizations, and often their stories were very similar. They had some hunch that thing X was pretty nice, and then they'd dedicate many years of highly intelligent work to pursuing it.

This is a big deal for individual careers, but it's often a bigger one for massive projects.

Take SpaceX, Blue Origin, Neuralink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Billion, some much more. They all have very large groups of brilliant engineers and scientists. None of them seems to have researchers really analyzing the mission to make sure it actually makes sense. From what I can tell, each started with a mission that effectively came from a pet idea of the founder (in fairness, Elon Musk was involved in 3 of these), and then billions of dollars were spent executing on it.

And those are the nicer examples. I'm pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of "connecting the planet".

"Founded in 2004, Facebook's mission is to give people the power to build community and bring the world closer together."[3]

Now, Facebook/Meta has over 60,000 employees and a market cap of around $1 Trillion. Do they have at least 2 full-time employee equivalents (~0.003% of the company) doing real cost-benefit analyses on whether Facebook is actually expected to achieve its mission statement? They had to take their business risks seriously, at least once, in their SEC filings. Surely they could do something similar for their moral risks if they wanted to.
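For reference, the rough arithmetic behind that percentage, using the ~60,000-employee figure above:

$$\frac{2}{60{,}000} \approx 0.0033\% \approx 0.003\%$$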

Historically, big philanthropy has also been fairly abysmal here. My impression is that Andrew Carnegie spent very little, if anything, to figure out if libraries were really the best use of his money, before going ahead and funding 3,000 libraries.

I'm also fairly confident that the donors of new Stanford and Harvard programs haven't done the simplest of reasonable analyses. At the very least, they could pay a competent person $100,000 to do an analysis before spending $50 Million in donations. But instead, they have some quick hunch that "leadership in government at Harvard" deserves $50 Million, and they donate $50 Million.[4]

Politics is even bigger, and politics is (even) worse. I rarely see political groups seriously red-team their own policies before signing them into law, after which the impacts can last for hundreds of years.

I'm not sure why society puts up with all of this. When someone in power basically says something like, 

"I have this weird hunch that X is true, and therefore I'm dedicating major resources into it, but also, I haven't bothered to have anyone spend a small amount of time analyzing these points, and I'm never going to do so.", 

it really should be taken with about the same seriousness as,

"I'm dedicating major resources into X because of the flying spaghetti monster."

But as of now, this is perfectly normal.

I have thoughts on why this is (and I'm curious to get others), and of course, I'm interested in the best ways of fixing it, but that's probably enough for one post.

Clarifications / Responses

Jason Crawford wrote,

I worry that this approach would kill the spark of vision/drive that makes great things happen.

I think major tech initiatives can be great and high-EV, I just think that eventually, someone should make sure they are the best use (or at least a decent use) of their resources. Maybe not when there are 3 people working on them, but at the very least when you get to a thousand.

A few people on the Facebook post wrote that the reason for these sorts of moves is that they're not really altruistic.

Sahar wrote,

I’m not sure most people who donate to Harvard do it because they think it’s the most effective. I’d imagine they do it for the prestige and social acclaim and positioning before anything else

Eric wrote,

I think the most obvious explanation is that people are doing the things they want/like to do, and then their brain's PR department puts a shiny altruistic motive sheen on it. Presumably your physics professor just really enjoyed doing physics and wasn't interested in doing finance, and then told himself an easily believable story about why he chose physics. Society puts up with it because we let people do what they want with their time/effort as long as it's not harmful, and we discount people's self-proclaimed altruistic motives anyway. Even if people do things for obviously irrational reasons or reasons that everyone else disagrees with, as long as it seems net positive people are fine with it (e.g. the Bahai gardens in Haifa Israel, or the Shen Yun performances by Falun Gong).

I largely agree that these are reasonable explanations. But to be clear, I think these sorts of explanations shouldn't act as reasons for this phenomenon to be socially acceptable. If we find patterns of people lying or deluding themselves for interesting reasons, we should still try to correct them.


[1] To be clear, there's often red-teaming of the technical challenges, but not the moral questions.

[2] SpaceX has a valuation of just over $100 billion, and I haven't seen a decent red-team from them on the viability of Mars colonies vs. global bunkers. If you have links, please send them!

[3] https://investor.fb.com/resources/default.aspx

[4] This is based on some examples I remember hearing about, but I don't have them in front of me.

Comments

Very interesting - I've been thinking about a generalized theory of bikeshedding that also applies to careers, where some people will have initial exposure to a career through, say, an internship, and because they then know that topic very well and are ambiguity averse, they'll just continue with it until the end of their lives. Because they do value impact they'll post-hoc rationalize their choice as very important and then fall prey to the sunk cost fallacy.

I had similar thoughts on Gates recently after watching his Netflix documentary:

"The Gates foundation focuses on Water, Sanitation and Hygiene (WASH), because diarrheal deaths are about 1m/year. 

They invested quite heavily in this and also seem to routinely leverage money from governments, and influence the discourse on the relative priority of WASH within global development. This could be net negative because global health might not be as effective as other economic development interventions (c.f. the work by Lant Pritchett).

He seems to have spent an extraordinary amount of money on WASH and just generally global development.

What caused him to focus on this? And what is thus the more distal cause of the EA focus on global health? Thinking about this might uncover non-optimal path dependency.

There seem to be a few causes:

  • because he read a NYT article by Nicholas Kristof on diarrheal disease, which affects people directly.
  • because he experienced burnout at Microsoft and wanted to do something more meaningful and direct
  • He personally went to India and vaccinated children himself, giving him an emotional attachment to the cause

I used to be quite the fan of Gates until now, and though I thought his foundation could have done better if it were more flexible, I always thought he gets things roughly right."

Yea. A whole lot of "charity founder" stories are a lot like that. Like,

"I, by chance, wound up in rural Kenya. And when I was there, I came across a kid without pencils. And this kid obviously would have been helped by pencils. So I devoted the next 20 years of my life to help kids in Kenya get pencils."

That reminds me of a charity I was faux-promoting to friends in high school: Bookmarks for the Poor.

Based on your description of the documentary, I wonder to what extent Gates' explanations reflect his actual reasoning. He seems very cautious and filtered, and I doubt an explanation of a boring cost-benefit analysis would make for a good documentary.

Not that I think there necessarily was a good cost-benefit analysis, just that I wouldn't conclude much either way from the documentary.

Good point- but it's impossible to know if there are hidden reasons for his behavior. However, I find  my theory more plausible: he didn't think much about social impact initially, made a lot of money at Microsoft, then turned towards philanthropy, and then selected a few cause areas (US education, global health, and later clean energy), partially based on cost-effectiveness grounds (being surprised that global health is so much more effective than US healthcare), but it seems unlikely that he systematically commissioned extensive cause prioritization work OpenPhil style and then after lengthy deliberation came down on global health being a robustly good buy that is 'increasingly hard to beat'. 

The Gates documentary was part of what pushed me towards "okay, earning-to-give is unlikely to be my best path, because there seems to be a shortage in people smart enough to run massive (or even midsized) projects well." I guess the lack of red-teaming is a subset of constrainedness (although is it more cognitive bias on the funders, vs lack of "people / orgs who can independently red-team ideas"? Prolly both).


I really like this post, and I think it is a useful way to frame putting EA thinking on a foundation that is minimal and light on more substantive philosophical commitments. You don't have to be a total utilitarian, master of expected utility theory and metanormativism to be an EA. You just have to commit to the premise: if I am going to try to do good, I should spend some time rationally thinking about what would be the best way to do that. As it is, the vast majority of people don't do this. I think that on reflection most people would agree that the way people think about doing good is really pathological. Most departures from EA thinking are not, then, grand philosophical disagreements; they just stem from the fact that people have gone with their passion or fallen into an area or done what feels right.

Small suggestion: drop the Facebook example and find a better one. Facebook was obviously not founded out of a grand prosocial vision; that was a pretty clear case of “greenwashing” afterwards.


Side point - has anyone done an analysis of the social costs of Facebook and Instagram? They seem to me like nothing more than enormous compulsive time sinks.

Anecdotal, but: Facebook is the reason I'm married and has been, I think, enormously valuable to me as a way to connect with people in my life (relative to spending the same amount of time reading or playing video games or trying some much less efficient way of keeping up with people). I expect I'd be much lonelier without it.

Non-anecdotally, willingness-to-pay data for Facebook seems to indicate that people find it valuable (though you could argue that they are wrong in some way, if you want to e.g. make a Facebook/smoking analogy).

Overall, I suspect that Facebook (like most social networks) creates obvious harm for some users, and non-obvious benefit for many users*, leading people to see it as a more negative entity than it actually is.

*Not just extreme cases like mine — my parents, for example, mostly use Facebook to keep up with old friends and extended family, and it seems like a much better way to do this than e.g. trying to do regular phone calls with all of those people. I expect this use case is really common.


That's interesting. My sense is that most people I know who use Facebook view it as a compulsive time sink. I look back and think that probably 99% of the time I used to spend on it was wasted and that I am much better off having deactivated my account. I don't feel I have lost anything in terms of keeping in touch with people from not being on there - people can email me, call me or WhatsApp me fairly easily if they want to get in touch. People I know who have left it are happy to have done so and regret not doing it earlier.

Yeah I don't think the willingness to pay argument works because my claim is that it's like compulsive gambling - people's willingness to spend time on it isn't a sign that it is valuable. I do find it a bit of a grim view of human potential for fulfilment that the value of a business is that it aspires to do something that is very arguably marginally better than watching TV or playing video games. 

I'm surprised to hear that so many people you speak with feel that way. My experience of using Facebook (with an ad blocker) is that it's a mix of interesting thinkposts from friends in EA or other academic circles + personal news from people I care about, but would be unlikely to proactively keep in touch with (extended family, people I knew in college, etc.). 

I certainly scroll past my fair share of posts, but the average quality of things I see on FB is easily competitive with what I see on Twitter (and I curate my Twitter carefully, so this is praise).

As a random sample, when I open Facebook now, the posts I see are:

  1. A question in an EA group about making wills (I'd have answered it if someone else hadn't already, and I'm glad that my friends in the group are seeing that post)
  2. A cute, nerdy parenting anecdote from Scott Aaronson
  3. A nice personal update with a lovely photo from an acquaintance (scroll past, but happy to see he's well)
  4. An irrelevant update from a page I followed in high school — I unfollowed it immediately and won't ever see it again
  5. A post from Ozzie Gooen on Zvi's recent SFF grant writeup (I happened to know about this already, but if I didn't, I'd be really glad I saw his post)
  6. An amusing Twitter screenshot
  7. A post from a friend about her recent sobriety and the journey she took to get there (I hadn't been aware of her struggles, but this is someone I really like; I read her story with interest and came away feeling hopeful)

I wonder whether FB looks different for people who see it as a time sink, or if they just have a higher bar for "good use of idle time" than I do.

Also anecdotally I have found Facebook quite positive since I installed a feed blocker. Now I just get event invites, notifications from groups I'm interested in (which are much easier to curate than a feed), a low-overhead messaging service, and the ability to maintain shallow but genuinely friendly relationships and occasionally crowdsource from a peer group in more helpful ways than Google.

Overall I'd say it's comfortably though not dramatically net positive like this - though given that it involves deliberate hacking out of one of the core components of the service I wouldn't take it as much of a counterpoint to 'Facebook is generally bad'.

Facebook was obviously not founded out of a grand prosocial vision; that was a pretty clear case of “greenwashing” afterwards.

What makes you say this? Is there some definitive story of Facebook's founding that you think proves your point?

It's just a judgement call. Something I thought seemed obvious to most people, but perhaps not so obvious.

As I noted below, my epistemic basis is only that when he first started spouting that “everything should be social” stuff it looked like he was just mouthing the words.

Still, according to Encyclopedia Britannica

it began at Harvard University in 2003 as Facemash, an online service for students to judge the attractiveness of their fellow students.

So it's pretty clear that the founding was not a grand prosocial vision. Whether the later talk was 'greenwashing' or 'realization that actually this can do lots of good' is perhaps less clear-cut.

I think you're right here. 

From my point of view, I think some of the Facebook employees felt motivated by the mission, and I think Mark really believes it. But at the same time, I could easily imagine there could be better examples. 

If others here have suggestions, do feel free to raise them!

Maybe our friend Mark believes it now, but if so I think it’s because he convinced himself/motivated reasoning. My epistemic basis: when he first started spouting that “everything should be social” stuff it looked like he was just mouthing the words.

I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside view outcomes in order of likelihood.

  1. The project fails to gain any traction or have any meaningful impact on the world.
  2. The project has an impact on the world, but despite intentions the impact is negative, neutral or too small to matter.
  3. The project has enough of a positive outcome to matter.

In general, I'd say that, on an outside view, this is the most likely ordering of outcomes for any ambitious/world-saving project. And I was saying it specifically to elicit feedback and make sure people were red-teaming me morally.

However, it's not specifically clear to me that putting more money into research/thinking improves it much? 

For one thing, again, the most likely outcome is that the project fails to gain any traction or have any impact at all, so you need to be de-risking that through classic lean-startup MVP-style stuff anyway. You shouldn't wait on that and instead spend a bunch of money figuring out the positive or negative effects at scale of an intervention that won't actually be able to scale (most things won't scale).

For another, I think that a lot of the benefit of potentially world-changing projects comes through hard-to-reason-about flow-through effects. For instance, in your example about Andrew Carnegie and libraries, a lot of the benefits would be some hard-to-gesture-at stuff related to having a more educated populace and how that affects various aspects of society and culture. You can certainly create Fermi estimates and systems models, but ultimately people's models will be very different, and missing one variable or relationship in a complex systems model of society can completely reverse the outcome.
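To make the "missing one variable can completely reverse the outcome" point concrete, here's a deliberately toy Fermi estimate in code. Every number in it is invented purely for illustration (it makes no claims about actual libraries); the only point is the structure of the calculation:

```python
# Toy Fermi estimate: a single omitted variable can flip the sign of a
# cost-benefit estimate. All numbers below are invented for illustration.

libraries_funded = 3_000
patrons_per_library_per_year = 2_000
years_of_operation = 50
value_per_patron_year = 5        # hypothetical dollars of educational benefit
cost_per_library = 250_000       # hypothetical all-in cost per library

benefit = (libraries_funded * patrons_per_library_per_year
           * years_of_operation * value_per_patron_year)
cost = libraries_funded * cost_per_library
print(f"Naive net benefit:    ${benefit - cost:,}")        # comes out positive

# One variable the naive model left out: suppose 60% of those patron-years
# would have been served anyway by municipal libraries that the philanthropy
# crowded out.
crowding_out = 0.6
adjusted_benefit = benefit * (1 - crowding_out)
print(f"Adjusted net benefit: ${adjusted_benefit - cost:,.0f}")  # flips negative
```

The particular numbers don't matter; the point is that one omitted term (crowding out, in this sketch) is enough to flip the sign of the whole estimate.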

Ultimately, it might be better to use the types of reasoning/systems analysis that work under Knightian uncertainty, things like "Is this making us more anti-fragile? Is this effectual and allowing us to continually build towards more impact? Is this increasing our capabilities in an asymmetric way?"

This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important - it's clearly a thing that increases the anti-fragility of humanity, even if you don't have exact models of the threats that it may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats.  Certainly, you can get into specifics, and you can realize it doesn't make you as anti-fragile as you thought, but again, it's very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.

I ultimately think what makes sense is a sort of culture of continuous oversight/thinking about your impact, rather than specific up front research or a budget. Maybe you could have "impact-analysisathons" once a quarter where you discuss these questions. I'm not sure exactly what it would look like, but I notice I'm pretty skeptical at the idea of putting a budget here or creating a team for this purpose. I think they end up doing lots of legible impact analysis which ultimately isn't that useful for the real questions you care about.

This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important - it's clearly a thing that increases the anti-fragility of humanity, even if you don't have exact models of the threats that it may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats.  Certainly, you can get into specifics, and you can realize it doesn't make you as anti-fragile as you thought, but again, it's very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.

This would be a good argument if Musk had built and populated Antarctic bunkers before going to space.

It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.

I agree it provides stronger challenges. I think I disagree with the other claims as presented, but the sentence is not detailed enough for me to really know if I actually disagree.

Meta: thanks for turning this into a forum post! It seems like it's sparked good discussion that wouldn't have happened if it was solely on Facebook.

It seems like it's sparked good discussion that wouldn't have happened if it was solely on Facebook.

Agreed! I think much of the time the posts do worse on the EA Forum (especially when they are posted on Shortform, I think), but this seems like a significant exception.

Great article!

At risk of losing my "EA Card" and being permanently cast out of this community with no recourse, could we perhaps maybe "red team" EA itself? [ducks]

I feel really bad that you feel like you need to duck here, though I can understand why.

Ideally, I think that red-teaming EA should really just be central to EA.

I think some key people do things like this, but it's tough for junior people to do, because it's really hard to distinguish "a good person doing a red-team" from "a grumpy and disgruntled person."

For those (brave) reading this, I do recommend more red-teaming, but I would note that it needs to be done a bit carefully. 

For those particularly daring, you can red-team the EA funders :)

Thanks Ozzie! Glad to see I'm still welcome here!!

I'm sold on the idea of red-teaming (especially for EA and for red-teaming the idea of red-teaming itself). A few concerns:

  1. I have no idea where to begin to start working at red-teaming
  2. I have no idea if I'd even be any good at red-teaming 

Sorry, maybe these concerns have already been covered really well and I missed it. Thanks!

My take is that visionary engineers tend to start by imagining an interesting mechanism they could build, and then hunting for inspiring justifications so that people will give them the resources to do it.

Facebook, space colonies, Neuralink, and OpenAI all sort of fit the bill.

There’s a trust that technology is usually good, and that the hard thing is to find an interesting mechanism, coordinate, inspire, and accomplish building it. Once done, people will find uses for it. At least the sales team will find clients, anyway.

Engineering is, in what I’ve experienced as the culture of engineers, the process of realizing the things that can be built.

World betterment happens on its own, because technology is usually good. If not, it can be debugged or improved with further technology.

I think this is true-ish. It’s really pretty hard to imagine a novel mechanism that has any utility at all, and get it working. Especially if you need lots of money and other people to work with you.

In a way, it’s even better to lean on a simple story you can think up in 10 minutes to justify your project. Because that’s how much time your funders will spend. And the public. And regulators. There’s a strong short-term incentive to pursue projects that seem best after the least amount of thought.

The long term is made out of a series of short terms.

Of course, we know the failure modes. Externalities, existential threats, hijacking of human psychology, and regulatory capture are a few.

But if you want to understand the reason why engineers act this way, perhaps one place to start is by imagining that tractable, interesting, juicy ideas in a person’s skill set are not very fungible and pretty uncommon.

If you then wanted to better steer these sorts of decisions, how would you do that? How would you get the engineers on board?

I think that 80,000 Hours helps to demonstrate that, with reasonable evidence and thought, it's possible to guide smart people (including engineers) toward doing more valuable things.

A big issue is that there's just a dearth of decent information out there. I think that if this can be remedied, things will continue to improve. This includes the fact that many of these flimsy theories are much worse than I think a lot of people assume. If people can help point that out, I'd expect there would be less public reliance on them.

80,000 Hours does seem like a relevant reference class here, and they've certainly had an important impact in pushing me away from my original career plans into something I think is more high-impact.

Another example might be Project Drawdown, which publishes "a how-to guide for employees pushing for sweeping climate action and includes EIGHT KEY LEVERAGE POINTS to help the world reach drawdown."

A third is the SENS foundation, which argues for an anti-aging strategy.

80k thinks about causes on the highest level, as opposed to Project Drawdown (climate change focused) or SENS (aging/health focused).

So there seems to be support for your vision. Many other people seem to believe that it's high-leverage to concentrate on helping engineers choose impactful projects.

One difference between your vision and all of these approaches is that you're focused primarily on a negative approach, knocking down flimsy pet theories. By contrast, 80k, Project Drawdown, SENS, and most other examples of this sort of project focus on a positive approach, highlighting the projects and cause areas they think are most important.

There are certainly some examples of a negative approach within, say, 80k or GiveWell. Usually, it's a motivating example (e.g. PlayPumps), or a targeted argument (e.g. 80k's articles against the impact of becoming a doctor). These can be valuable, of course! It's just not the majority of the public-facing material in these examples. Though I expect that all these organizations have a big pile of investigations of charities, causes, and interventions that they've looked into and ultimately concluded are not worth highlighting.

So if we're using 80k as a reference class, it may be ultimately necessary to also create and focus on a positive agenda for the information you're presenting. What sorts of engineering projects are important, tractable, and neglected?

One difference between your vision and all of these approaches is that you're focused primarily on a negative approach, knocking down flimsy pet theories. By contrast, 80k, Project Drawdown, SENS, and most other examples of this sort of project focus on a positive approach, highlighting the projects and cause areas they think are most important.

Agreed. I see the pattern here, "flimsy ideas, enormous initiatives," as a clear example of large-scale failure. The solutions that these problems hint at are another important conversation.
 

So if we're using 80k as a reference class, it may be ultimately necessary to also create and focus on a positive agenda for the information you're presenting. What sorts of engineering projects are important, tractable, and neglected?

Also agreed. I think a lot of EA analysis now would ideally be used to help inspire future altruistic programs. (Charity Entrepreneurship as perhaps the most obvious example). I think that we'll be seeing more work like this over the next few years.

I strongly agree with this post and its message.

I also want to respond to Jason Crawford's response. We don't necessarily need to move to a situation where everyone tries to optimize things as you suggest, but at this point it seems that almost no one tries to optimize for the right thing. I think even changing this for a few percent of entrepreneurial work or philanthropy could have a tremendous effect, without losing much of the creative spark people worry we might lose - or we might even gain more, as new directions open.

I disagree with Crawford's take. It seems to me that effective altruists have managed to achieve great things using that mindset over the past years - which is empirical evidence against his thesis.


Yeah, I second this. He argues that many great things have been achieved for civilisation without people trying to optimise for doing the best thing, or spending any time rationally examining what might be best. But this is just because that is how nearly all human decisions have ever been made. Even if, by random chance, only 0.001% of projects happen to have been the optimal thing for that person to do, we would still be able to point to lots of examples of extreme success stories. But this does almost nothing to undermine the case that the world would be a lot better if more people actually tried to do the best thing.

Just to add a bit more detail: I think that Jason Crawford saw a repeated pattern of beginning entrepreneurs spending a lot of time prioritizing and making models, and failing at this process.

I think I agree with him on the specific question of:
"Should small entrepreneur teams, with typical software ventures, spend several months prioritizing projects? Or should prioritization be a pretty short thing, and then they go off to experiment?"

That said, I think in the greater scheme of things, ecosystems can help with prioritization. For example:
- VCs prioritize between fields
- Think tanks writing reports about exciting industries (for people like entrepreneurs to read)
- People starting megaprojects that won't have great feedback for 5+ years

Also, Facebook does research its own technology’s effects on users. I can’t vouch for the quality of the research, but it exists.

https://www.google.com/amp/s/www.wsj.com/amp/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739

Agreed, and I think these are positive steps. They seem pretty narrow in scope, but it's at least something.

I actually find it quite easy to believe that Musk's initiatives are worth more than the whole EA movement - though I'm agnostic on the point. Those ideas exist in a very different space from effective altruism, and if you fail to acknowledge deep philosophical differences and outside view reasons for scepticism about overcommitting to one worldview you risk polarising the communities and destroying future value trades between them. For example:

  • Where EA starts (roughly) from an assumption that you have a small number of people philosophically committed to maximising expected welfare, Musk's companies start with a vision a much larger group of people find emotionally inspiring, and a small subset of them find extremely inspiring. Compare the 35-50 hour work weeks of typical EA orgs' staff vs the 80-90 common among Tesla/SpaceX employees - the latter seem to be far more driven, and I doubt that telling them to go and work on AI policy would a) work or b) inspire them to anywhere near comparable productivity if it did.
  • Musk's orgs are driven by a belief that they can one day make a profit from what they do, and that if they can't, they shouldn't succeed.
  • Most EA orgs have no such market mechanism, even in the long term. And EA research has perverse incentives that we rarely seem to recognise - researchers gain prestige for raising 'interesting questions' that might minimally if at all affect anyone's behaviour (eg moral uncertainty, cluelessness, infinite ethics, doomsday arguments etc), and they're given money and job security for failing to answer them in favour of ending every essay with 'more research needed'.
  • In particular they're incentivised to produce writings that encourage major donors to fund them. One plausible way of doing this, for example, is to foster an ingroup mentality, encouraging the people who take them seriously to think of themselves as custodians of a privileged way of thinking (cf the early rationality movement's dismissal of outsiders as 'NPCs'). I don't know of any meta-level argument that this should lead to a more reliable understanding of the world than, say, the wisdom of crowds.
  • As Halffull discussed in detail in another comment, Musk's initiatives are immensely complicated, and a priori reasoning about them might essentially be worthless. We could spend lifetimes considering them and still not have meaningfully greater confidence in their outcomes - and we'd have marked ourselves as irrelevant in the eyes of people driven to work on them. Or we could work with the people who're motivated by the development of such technologies and encourage what Halffull calls a 'culture of continuous oversight/thinking about your impact' - which those companies seem to have, at least compared to other for-profits.
  • Empirically, the EA movement has a history of ignoring or rejecting certain causes as being not worthy of consideration, then coming to view them as significant after all. See GWWC's original climate change research which basically dismissed the cause vs Founders Pledge's more recent research which takes it seriously as one of their top causes, Open Philanthropy Project's explicit acknowledgement of their increasing concern with 'minor' global catastrophic risks, or just see all the causes OPP have supported with their recent grants (how many EAs would have taken you seriously 10 years ago if you'd thought about donating to US criminal justice reform?). I would say we have a much better track record of unearthing important causes that were being neglected than of providing good reasons to neglect causes.

I really didn't mean for this post to be saying much about effective altruism, and especially, I wasn't using it to argue that "effective altruism is better than Elon Musk."

All that said, as to my own opinions, I think Elon Musk is clearly tremendously important. It's quite possible, based on rough numbers like market value, that he singlehandedly is still a lot more valuable in expectation than effective altruists in total. His net worth is close to $300 Billion, and among all EAs, maybe we're at $60 Billion. So even if he spent his money 1/4th as effectively, he could still do more good. 
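For what it's worth, the back-of-the-envelope arithmetic behind that claim, using the figures above and treating "1/4th as effectively" as a simple multiplier:

$$\$300\text{B} \times \tfrac{1}{4} = \$75\text{B} > \$60\text{B}$$

i.e., roughly $75 Billion of EA-equivalent spending versus roughly $60 Billion.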

However, that doesn't mean that he doesn't have at least some things to learn from the effective altruism community.

On planning, I'm not convinced that Elon Musk is necessarily a master strategist who was doing some 8-dimensional planning on these topics. He clearly has a lot of talents helping him. I'd expect him to be great at some things, and I imagine it will take a while before anyone (including him) could really tell which specific things led to his success.

In my experiences with really top-performing people, often they just don't think all that much about many of these issues. They have a lot of things to think about. What can look like "a genius move of multi-year planning" from the outside, often looks to me like "a pretty good guess made quickly". 

No-one's saying he's a master strategist. Quite the opposite - his approach is to try stuff out and see what happens. It's the EA movement that strongly favours reasoning everything out in advance.

What I'm contesting is the claim that he has 'at least some things to learn from the effective altruism community', which is far from obvious, and IMO needs a heavy dose of humility. To be clear, I'm not saying no-one in the community should do a shallow (or even deep) dive into his impact - I'm saying that we shouldn't treat him or his employees like they're irrational for not having done so to our satisfaction with our methods, as the OP implies.

Firstly, on the specific issue of whether bunkers are a better safeguard against catastrophe, that seems extremely short-termist. Within maybe 30-70 years, if SpaceX's predictions are even faintly right, a colony on Mars could be self-sustaining, which seems much more resilient than bunkers, and likely to have huge economic benefits for humanity as a whole. Also, if bunkers are so much easier to set up, all anyone has to do is found an inspiring for-profit bunker-development company and set them up! If no-one has seriously done so at scale, that indicates to me that socially/economically they're a much harder proposition, and that this might outweigh the engineering differences.

Secondly, there's the question of what the upside of such research is - as I said, it's far from clear to me that any amount of a priori research will be more valuable than trying stuff and seeing what happens.

Thirdly I think it's insulting to suppose these guys haven't thought about their impact a lot simply because they don't use QALY-adjacent language. Musk talks thoughtfully about his reasons all the time! If he doesn't try to quantify the expectation, rather than assuming that's because he's never thought to do so, I would assume it's because he thinks such a priori quantification is very low value (see previous para) - and I would acknowledge that such a view is reasonable. I would also assume something similar is true for very many of his employees, too, partly because they're legion compared to EAs, partly because the filtering for their intelligence has much tighter feedback mechanisms than that for EA researchers.

If any EAs doing such research don't recognise the validity of these sorts of concerns, I can imagine it being useless or even harmful.

It seems like we have some pretty different intuitions here. Thanks for sharing!

I was thinking of many of my claims as representing low bars. To me, "at least some things to learn from a community" isn't saying all that much. I'm sure he, and us, and many others, have at least some things that would be valuable to learn from many communities.

"Thirdly I think it's insulting to suppose these guys haven't thought about their impact a lot simply because they don't use QALY-adjacent language" -> A lot of the people I knew, in the field (including the person I mentioned), pretty clearly hadn't thought about the impact a whole lot. It's not just that they weren't using QALYs, it's just that they weren't really comparing it to similar things. That's not unusual, most people in most fields don't seem to be trying hard to optimize the impact globally, in my experience. 

I really don't mean to be insulting to them, I'm just describing my impression. These people have lots of other great qualities.

One thing that would clearly prove me wrong would be some lengthy documents outlining the net benefit, compared to things like bunkers, in the long-term. And, it would be nice if it were clear that lots of SpaceX people paid attention to these documents.

A lot of the people I knew, in the field (including the person I mentioned), pretty clearly hadn't thought about the impact a whole lot. It's not just that they weren't using QALYs, it's just that they weren't really comparing it to similar things.

Re this particular example, after you had the conversation did the person agree with you that they clearly hadn't thought about it? If not, can you account for their disagreement other than claiming that they were basically irrational?

I seem to have quite strongly differing intuitions from most people active in central EA roles, and quite similar ones (at least about the limitations to EA-style research) to many people I've spoken to who believe the motte of EA but are sceptical of the bailey (ie of actual EA orgs and methodology). I worry that EA has very strong echo chamber effects, reflected in eg the OP, in Linch's comment below, in Hauke's about Bill Gates, in various other comments in this thread suggesting 'almost no-one' thinks about these questions with clarity, and in countless other such casual dismissals I've heard by EAs of smart people taking positions not couched in sufficiently EA terms.

FWIW I also don't think claiming someone has lots of other great qualities is inconsistent with being insulting to them.

I don't disagree that it's plausible we can bring something. I just think that assuming we can do so is extremely arrogant (not by you in particular, but as a generalised attitude among EAs). We need to respect the views of intelligent people who think this stuff is important, even if they can't or don't explain why in the terms we would typically use. For PR reasons alone, this stuff is important - I can only point to anecdotes, but so many intelligent people I've spoken to find EAs collectively insufferable because of this sort of attitude, and so end up not engaging with ideas that might otherwise have appealed to them. Maybe someone could run a Mechanical Turk study on how such messaging affects reception of theoretically unrelated EA ideas.

Also we still, as a community, seem confused over what 'neglectedness' does in the ITN framework - whether it's a heuristic or a multiplier, and if the latter how to separate it from tractability and how to account for the size of the problem in question (bigger, less absolutely neglected problems might still benefit more from marginal resources than smaller problems on which we've made more progress with fewer resources, yet I haven't seen a definition of the framework that accounts for this). Yet anecdotally I still hear 'it's not very neglected' used to casually dismiss concerns on everything from climate change through nuclear war to... well, interplanetary colonisation. Until we get a more consistent and coherent framework, if I as a longtime EA supporter am sceptical of one of the supposed core components of EA philosophy, I don't see how I'm supposed to convince mission-driven not-very-utilitarians to listen to its analyses.

I think that it's not always possible to check that a project is "best use, or at least decent use" of its resources. The issue is that these kinds of checks are really only good on the margin. If someone is doing something that jumps to a totally different part of the Pareto manifold (like building a colony on Mars or harnessing nuclear fission for the first time), conventional cost-benefit analyses aren't that great. For example, a standard post-factum justification of the original US space program is that it accelerated progress in materials science and computer science in a way that paid off the investment even if you don't believe that manned space exploration is worthwhile. Whether or not you agree with this (and I doubt this counterfactual can be quantified with any confidence), I don't think that the people who were working on it would have been able to make this argument convincingly at the time. I imagine that if you ran a cost-benefit analysis at the time, it would have found that a better investment would be to put money into incremental materials research. But without the challenge of having to develop insulators, etc., that work in space, there would have plausibly been fewer new materials discovered.

I think that here there is an important difference between SpaceX and facebook, since SpaceX is an experiment that just burns private money if it fails to have a long-term payoff, whereas facebook is a global institution whose negative aspects harm billions of people. There's also a difference between something like Mars exploration, which is a simple and popular idea that's expensive to implement,  and more kooky vanity projects which consist of rich people imagining that their being rich also makes them able to solve  hairy problems that more qualified people have failed to solve for ages (an example that comes to mind, which thankfully doesn't have billions of dollars riding on it, is Wolfram's project to solve physics: https://blog.wolfram.com/2021/04/14/the-wolfram-physics-project-a-one-year-update/). I think that many big ambitious initiatives by billionaires are somewhere in between kooky ego-trip and genuinely original/pareto-optimal experiment, but it seems important to recognize that these are different things. Given this point of view, along with the general belief that large systems tend to err on the side of being conservative, I think that it's at least defensible to support experiments like SpaceX or Zuckerberg's big Newark school project, even when (like Zuckerberg's school project) they end up not being successful.

I imagine that if you ran a cost-benefit analysis at the time, it would have found that a better investment would be to put money into incremental materials research.

Space is a particularly complicated area with respect to EV. I imagine that a whole lot of the benefit came from "marketing for science+tech", and that could be quantified easily enough.

For the advancements they made in materials science and similar, I'm still not sure these were enough to justify the space program on their own. I've heard a lot of people make this argument to defend NASA, and I haven't seen them refer to simple cost/benefit reports. Sure, useful tech was developed, but that doesn't tell us that, by spending the money on more direct measures, we couldn't have had even more useful tech.

SpaceX is an experiment that just burns private money if it fails to have a long-term payoff

It also takes the careers of thousands of really smart, hard-working, and fairly altruistic scientists and engineers. This is a high cost!
 

along with the general belief that large systems tend to err on the side of being conservative

VCs support very reckless projects. If they had their way, startups would often be more ambitious than the entrepreneurs desire. VCs are trying to optimize money, similar to how I recommend we try to optimize social impact. I think that prioritization can and should often result in us having more ambitious projects, not less. 

Open questions:

What's the incentive structure here? If I'm following the money, it seems likely that there's a much higher likely return if you hype up your plausibly-really-important product, and if you believe in the hype yourself. I don't see why Musk or Zuckerberg should ask themselves the hard questions about their mission given that there's not, as far as I can see, any incentive for them to do so. (Which seems bad!)

What can be done? Presumably we could fund two FTE in-house at any given EA research organization to red-team any given massive corporate effort like SpaceX. But I don't have a coherent theory of change as to what that would accomplish. Pressure the SEC to require annual updates to SEC filings? Might be closer...

"If I'm following the money, it seems likely that there's a much higher likely return if you hype up your plausibly-really-important product, and if you believe in the hype yourself."

Yep, I think that's the case right now. But this is only the case because people, for some reason, actually buy these arguments.

To the extent that we can convince people not to do that, I would assume the problem would be lessened.

What can be done?

First, we can make sure that EAs treat this stuff skeptically (a low bar, but still a bar).  I'm not sure about second steps, but there are a lot of options.

80,000 Hours has done a really useful job (from what I can tell) improving the state of career decisions (for certain clusters of professionals). I could easily imagine corporate versions or similar, for example.

Somebody ought to start an independent organization specifically dedicated to red-teaming other people and groups' ideas.

I could start this after I graduate in the Fall, or potentially during the summer.

DM me if you want to discuss organization / funding.

FWIW, Elon Musk famously kiiiiiiinda had a theory-of-change/impact before starting SpaceX. In the biography (and the WaitButWhy posts about him), it notes how he thought about funding a smaller mission of sending mice to Mars, and used a material cost spreadsheet to estimate the adequacy of existing space travel technology. He also aggressively reached out to experts in the field to look for the "catch", or whether he was missing something.

This is still nowhere near good red-teaming/proving-his-hunch-wrong, though. He also didn't seem to do nearly as much talking-to-experts knowledge-base-building for his other projects (e.g. Neuralink).

And most groups don't even do that.

I'm pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of "connecting the planet".

This seems a bit naive to me. Most big companies come up with some generic nice-sounding reason why they're helping people. That doesn't mean the people in charge honestly believe that; it could easily just be marketing.

My read is that many bullshitters fairly deeply believe their BS. They often get to be pretty good at absorbing whatever position is maximally advantageous to them. 

Things change if they have to put real money down on it (asking to bet, even a small amount, could help a lot), but these sorts of people are good at putting themselves into positions where they don't need to make those bets.  

There's a lot of work on motivated reasoning out there. I liked Why Everyone (Else) Is a Hypocrite.

I consistently enjoy your posts, thank you for the time and energy you invest.

Robin Hanson is famous for critiques in the form of “X isn’t about X, it’s about Y.” I suspect many of your examples may fit this pattern. To wit, Kwame Appiah wrote that “in life, the challenge is not so much to figure out how best to play the game; the challenge is to figure out what game you’re playing.” Andrew Carnegie, for instance, may have been trying to maximize status, among his peers or his inner mental parliament. Elon Musk may be playing a complicated game with SpaceX and his other companies. To critique assumes we know the game, but I suspect we only have a dim understanding of ”the great game” as it’s being played today.

When we see apparent dysfunction, I tend to believe there is dysfunction, but deeper in the organizational-civilizational stack than it may appear. I.e., I think both Carnegie and Musk were/are hyper-rational actors responding to a very complicated incentive landscape.

That said, I do think ideas get lodged in peoples’ heads, and people just don’t look. Fully agree with your general suggestion, “before you commit yourself to a lifetime’s toil toward this goal, spend a little time thinking about the goal.”

That said, I’m also loath to critique doers too harshly, especially across illegible domains like human motivation. I could see how more cold-eyed analysis could lead to wiser aim in what things to build; I could also see it leading to fewer great things being built. I can’t say I see the full tradeoffs at this point.

I could see how more cold-eyed analysis could lead to wiser aim in what things to build; I could also see it leading to fewer great things being built

I think analysis really could help lead to more great things being built. It would be a complete catastrophe if someone said, "This analysis shows that SpaceX is less effective than bunkers... therefore we shouldn't do either"

With analysis and optimization, funders could be given more assurance that these projects are great, and could correspondingly put more money into them. This is how the VC world works. 

I think it's very easy to pattern match "we could use analysis" with "really mediocre bureaucratic red-tape", but that's not at all what I think we can and should aim for.

Just responding to "[2] SpaceX has a valuation of just over $100 billion, and I haven't seen a decent red-team from them on the viability of Mars colonies vs. global bunkers. If you have links, please send them!" Seems like a bad waste of money to me.

https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex

Martin Rees (the Astronomer Royal) has discussed this in a few  books, most recently On the Future: Prospects for Humanity.

Thanks for the links!

To be clear, I assume you're saying that SpaceX is a bad waste of money; not that red-team analyses would be bad wastes of money, right?

Not SpaceX itself, or doing red-team analysis. It seems to me that establishing a self-sustaining Mars colony to reduce existential risk is a bad waste of money (compared to eg establishing a self-sustaining Antarctica colony).
