Halffull

I'd also add Vitalik Buterin to the list.

If you're going to have a meeting this short, isn't it better to e.g. send a message or email instead? Having very short conversations like this means you've used up a large slot on your EAG calendar that could have gone to the types of conversations you can only have in person at EAG.

It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and poses stronger challenges.

I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside-view outcomes, in order of likelihood:

  1. The project fails to gain any traction or have any meaningful impact on the world.
  2. The project has an impact on the world, but despite intentions the impact is negative, neutral, or too small to matter.
  3. The project has a positive outcome large enough to matter.

In general, I'd say that on an outside view this is the most likely order of outcomes for any ambitious/world-saving project. And I was saying it specifically to elicit feedback and make sure people were red-teaming me morally.

However, it's not clear to me that putting more money into research/thinking improves things much?

For one thing, again, the most likely outcome is that the project fails to gain any traction or have any impact at all, so you need to be de-risking that through classic lean-startup, MVP-style work anyway. You shouldn't wait on that and spend a bunch of money figuring out the positive or negative effects at scale of an intervention that won't actually be able to scale (most things won't).

For another, I think that a lot of the benefit of potentially world-changing projects comes through hard-to-reason-about flow-through effects. For instance, in your example about Andrew Carnegie and libraries, a lot of the benefits would be some hard-to-gesture-at stuff related to having a more educated populace and how that affects various aspects of society and culture. You can certainly create Fermi estimates and systems models, but ultimately people's models will be very different, and missing one variable or relationship in a complex systems model of society can completely reverse the outcome.
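To make the "one missing variable can reverse the outcome" point concrete, here's a toy Fermi-style sketch with made-up numbers (the terms and magnitudes are purely illustrative, not drawn from any actual model of libraries):

```python
# Toy Fermi estimate of funding libraries, in arbitrary "value units".
# Every number here is made up purely to illustrate the point.

direct_benefit = 10        # e.g. literacy and education gains
civic_flow_through = 5     # better-informed populace -> better institutions

# A flow-through term the simple model omits, e.g. crowding out other
# community investment. Including it flips the sign of the conclusion.
crowding_out = -20

simple_model = direct_benefit + civic_flow_through
fuller_model = direct_benefit + civic_flow_through + crowding_out

print("simple model:", simple_model)   # 15 -> looks clearly positive
print("fuller model:", fuller_model)   # -5 -> one missing term reverses it
```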

Ultimately, it might be better to use the types of reasoning/systems analysis that work under Knightian uncertainty, things like "Is this making us more anti-fragile? Is this effectual and allowing us to continually build towards more impact? Is this increasing our capabilities in an asymmetric way?"

This is the exact type of reasoning that would lead someone to intuitively think that space settlements are important - it's clearly a thing that increases the anti-fragility of humanity, even if you don't have exact models of the threats it may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats. Certainly, you can get into specifics and realize it doesn't make you as anti-fragile as you thought, but again, it's very easy to miss some other specifics that are unknown unknowns and have that totally reverse your conclusion.

I ultimately think what makes sense is a sort of culture of continuous oversight/thinking about your impact, rather than specific up-front research or a budget. Maybe you could have "impact-analysisathons" once a quarter where you discuss these questions. I'm not sure exactly what it would look like, but I notice I'm pretty skeptical of the idea of putting a budget here or creating a team for this purpose. I think they'd end up doing lots of legible impact analysis which ultimately isn't that useful for the real questions you care about.

Sure, but "already working on an EA project" doesn't mean you have an employer.

Assuming you have an employer

This is great! Curious what (if anything) you're doing to measure counterfactual impact.  Any sort of randomized trial involving e.g. following up with clients you didn't have the time to take on and measuring their change in productive hours compared to your clients?

Yeah, I'd expect it to be a global catastrophic risk rather than an existential risk.

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

Yeah, I mostly focused on the Q1 question, so I didn't have time to do a proper growth analysis across 2021.

Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.

There isn't currently a way to get the expected value, just the median (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?

A lot of the value of a business's potential growth vectors comes in the tails. For this particular forecast it doesn't really matter, because the distribution is roughly bell-shaped, but if I were using this as, for instance, a decision-making tool to decide what actions to take, I'd really want to look at which ideas had a small chance of being runaway successes, and how valuable that makes them compared to other options which are surefire but don't have that chance of tail success. Choosing those ideas isn't likely to pay off on any single idea, but is likely to pay off over the course of a business's lifetime.
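To illustrate why the median and the expected value can point in opposite directions for tail-driven options, here's a minimal sketch with made-up payoffs and probabilities (hypothetical numbers, not taken from the actual forecast):

```python
import random

random.seed(0)
N = 100_000

# Option A: "surefire" - a modest payoff every time (hypothetical numbers).
option_a = [30_000] * N

# Option B: fat-tailed - usually fails, with a small chance of a runaway success.
def draw_option_b():
    r = random.random()
    if r < 0.90:
        return 0            # 90%: no traction, no payoff
    elif r < 0.99:
        return 25_000       # 9%: a modest win
    else:
        return 5_000_000    # 1%: runaway success

option_b = [draw_option_b() for _ in range(N)]

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

def mean(xs):
    return sum(xs) / len(xs)

print("A: median", median(option_a), "mean", round(mean(option_a)))
print("B: median", median(option_b), "mean", round(mean(option_b)))
# Option B's median is 0, so it looks worse than A by that statistic,
# but its expected value (~52,000) is higher - which is what matters
# if you get to take many such bets over a business's lifetime.
```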
