I think it's also easy to make a case that longtermist efforts have increased existential risk from artificial intelligence, given that the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) came from longtermist circles.
It's possible that EA has shaved a couple of counterfactual years off the time to catastrophic AGI, compared to a world where the community wasn't working on it.
If you're going to have a meeting this short, isn't it better to, e.g., send a message or email instead? Having very short conversations like this means you've wasted a large slot on your EAG calendar that you could have used for the types of conversations you can only have in person at EAG.
It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.
I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside-view outcomes in order of likelihood.
In general, I'd say that, on an outside view, this is the most likely ordering of outcomes for any ambitious/world-saving project. And I was presenting it specifically to elicit feedback and make sure people were red-teaming me morally.
However, it's not clear to me that putting more money into research/thinking specifically improves it much?
For one thing, again, the most likely outcome is that the project fails to gain any traction or have any impact at all. So you need to be de-risking that through classic lean-startup, MVP-style work anyway; you shouldn't wait on it and spend a bunch of money figuring out the positive or negative effects at scale of an intervention that won't actually be able to scale (most things won't scale).
For another, I think that a lot of the benefit of potentially world-changing projects comes through flow-through effects that are hard to reason about. For instance, in your example about Andrew Carnegie and libraries, a lot of the benefit would be hard-to-gesture-at stuff related to having a more educated populace and how that affects various aspects of society and culture. You can certainly create Fermi estimates and systems models, but ultimately people's models will be very different, and missing one variable or relationship in a complex systems model of society can completely reverse the outcome.
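To make the sign-reversal point concrete, here's a toy Fermi-style sketch (all numbers and the crowding-out parameter are made up for illustration, not taken from any real analysis): a naive model of a library program counts only direct benefit, while a slightly richer model adds one displacement relationship, and that single extra term flips the conclusion.

```python
# Toy Fermi-style model of a hypothetical library program's net impact,
# in arbitrary "benefit units". Illustrates how omitting a single
# relationship can reverse the sign of the estimate.

def net_impact(reach, benefit_per_person, crowding_out=0.0):
    """Net impact = direct benefit minus value displaced per person."""
    direct = reach * benefit_per_person
    displaced = reach * crowding_out
    return direct - displaced

# Naive model: only the direct effect is included.
naive = net_impact(reach=100_000, benefit_per_person=2.0)

# Richer model: suppose each user would otherwise have done something
# worth 2.5 units (a made-up parameter). The estimate's sign reverses.
richer = net_impact(reach=100_000, benefit_per_person=2.0, crowding_out=2.5)

print(naive)   # 200000.0 -- looks clearly positive
print(richer)  # -50000.0 -- now looks net negative
```

The point isn't the numbers, which are arbitrary; it's that two reasonable modelers who disagree about whether to include one relationship reach opposite conclusions.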
Ultimately, it might be better to use the types of reasoning/systems analysis that work under Knightian uncertainty, things like: "Is this making us more anti-fragile? Is this effectual, allowing us to continually build towards more impact? Is this increasing our capabilities in an asymmetric way?"
This is exactly the type of reasoning that would lead someone to intuitively think space settlements are important: they clearly increase the anti-fragility of humanity, even if you don't have exact models of the threats they may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats. Certainly, you can get into specifics and realize something doesn't make you as anti-fragile as you thought, but again, it's very easy to miss other specifics that are unknown unknowns and totally reverse your conclusion.
I ultimately think what makes sense is a culture of continuous oversight/thinking about your impact, rather than specific up-front research or a budget. Maybe you could have "impact-analysis-athons" once a quarter where you discuss these questions. I'm not sure exactly what it would look like, but I notice I'm pretty skeptical of the idea of putting a budget here or creating a team for this purpose. I think such teams end up doing lots of legible impact analysis that ultimately isn't that useful for the real questions you care about.
This is great! Curious what (if anything) you're doing to measure counterfactual impact. Any sort of randomized trial involving e.g. following up with clients you didn't have the time to take on and measuring their change in productive hours compared to your clients?
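The comparison suggested above could be sketched roughly like this (a hypothetical difference-in-differences with entirely made-up numbers, treating the clients you couldn't take on as a rough control group):

```python
# Hypothetical sketch: compare the change in weekly productive hours for
# clients taken on vs. clients who were followed up with but not taken on.
# All figures below are invented for illustration only.

def mean_change(before, after):
    """Average per-person change (after - before)."""
    return sum(a - b for b, a in zip(before, after)) / len(before)

clients_before  = [20, 25, 18, 22]
clients_after   = [28, 31, 25, 27]
waitlist_before = [21, 24, 19, 23]
waitlist_after  = [23, 25, 20, 24]

# Rough counterfactual estimate: difference in average change between groups.
effect = mean_change(clients_before, clients_after) \
       - mean_change(waitlist_before, waitlist_after)
print(effect)  # 5.25 extra productive hours/week attributable to the service
```

With a real sample you'd also want randomized assignment and a significance test rather than a bare difference in means, but this is the basic shape of the estimate.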
Yeah, I'd expect it to be a global catastrophic risk rather than existential risk.
Or perhaps money really is more important for these people, in that it has a bigger effect on their happiness (based on, e.g., environmental factors and genetic predisposition)? In other words, maybe these people are making correct predictions about how they work, rather than creating self-fulfilling prophecies. It is at least worth considering that the causality goes this way.
This, of course, is slight evidence that the causality goes in the direction you said.