Morpheus

Comments

A Sequence Against Strong Longtermism

I like your comparisons with other historical cases where people thought they had inevitable theories about society; it is something I think about as well.

I do have a pet peeve, though, about the following claim:

Expected values were being used by the authors inappropriately (that is, without data to inform the probability estimates).

Let's consider a very short argument for strong longtermism (and a tractable way to influence the distant future by reducing x-risk):
- There is a lot of future ahead of us.
- The universe is large.
- Humans are fragile / the universe is harsh (most planets are not habitable for us (yet); we don't survive in most of space by default).
⇒ Therefore the expected near-term outcomes of your actions become rounding errors compared to the expected outcomes of making sure humanity survives.
All three of these points (while more might be necessary for a convincing case for longtermism) are very much informed by physical theories, which in turn have been informed by data about the world we live in (observing through a telescope, going to the moon)!
To illustrate:

- Had I been born in a universe where physicists were predicting with a high degree of certainty (through well-established theories, like thermodynamics in our world) that the universe (all of which is already inhabited) would face an inevitable heat death 1000 years from now, then I would think the arguments for longtermism were weak, since they would not apply to the universe we live in.
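To spell out the "rounding error" step in the ⇒ line with purely illustrative numbers (the symbols and figures here are placeholders of mine, not from the post): let $N$ be the expected number of future lives conditional on humanity surviving, $\Delta p$ the reduction in extinction probability an action buys, and $B$ its near-term benefit. Then roughly

$$\mathbb{E}[\text{value of the action}] \approx B + \Delta p \cdot N,$$

and once premises like the three above make $N$ astronomically large (say $N \sim 10^{16}$ lives), even a tiny $\Delta p = 10^{-10}$ contributes $10^{6}$ expected lives and swamps any plausible near-term $B$.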


I am not convinced by your arguments around epistemology. I don't understand your fascination with Popper; his philosophy seems more like an informal way to make Bayesian updates. You did not provide enough evidence to convince me of the contrary. While I agree that rigid Bayesianism has flaws, my current best guess points toward more subjectivism, not less.
 

Anki deck for "Some key numbers that (almost) every EA should know"

My new favorite: What share of total computation did pocket calculators account for in 1986?

41%

Anki deck for "Some key numbers that (almost) every EA should know"

Thank you! The most surprising (though maybe not most impactful) cards for me so far were the ones on neurons:
Sure, mammals make up a minority of neurons, but HOW ON EARTH are 90 percent of those from humans?

Also, 30% from fish? I would have expected fish to be negligible.

Statistics for Lazy People, Part 1

I really like the idea behind this post/series. I'd already come across Lindy's Law / the delta-t argument and the rule of succession from reading other people use them in their predictions, but that seemed like a really inefficient way to learn them. I skimmed a few statistics textbooks, but I did not come across many techniques that I actually ended up using.

I also liked the examples you gave. I felt like 1-3 explicit practice problems at the end would also have been nice, like:

Tesla was founded in 2003.

  • How many years from now does Tesla have a 25% / 75% chance of still existing?

Or maybe this is silly?

Anyway...

I knew that the lifetime of something depends on how long it has stuck around and had a rough mental image of the distribution, but so far I had not actually bothered calculating it explicitly. So thanks for the heuristics.
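To make that explicit for myself (a minimal sketch, assuming the simple delta-t/Copernican version of the rule, where after surviving t_past years the chance of surviving another x years is roughly t_past / (t_past + x); the Tesla numbers just pick up the practice problem above, and the observation year is a made-up placeholder):

```python
# Delta-t / Lindy-style survival heuristic (a sketch, not necessarily the post's exact method).
# Assumption: we observe the thing at a uniformly random point of its total lifetime,
# which gives P(lasts another x years | has lasted t_past years) = t_past / (t_past + x).

def survival_probability(t_past: float, x: float) -> float:
    """Chance of lasting at least x more years, given t_past years so far."""
    return t_past / (t_past + x)

def years_until_probability(t_past: float, p: float) -> float:
    """Extra years after which the survival chance has dropped to p."""
    return t_past * (1 - p) / p

# Practice-problem numbers (illustrative): Tesla, founded 2003, viewed from 2021.
t_past = 2021 - 2003  # 18 years of survival so far
print(years_until_probability(t_past, 0.75))  # 6.0  -> 75% chance of lasting at least this much longer
print(years_until_probability(t_past, 0.25))  # 54.0 -> 25% chance of lasting at least this much longer
```

One nice consequence: the 50% point is always "as long again as it has already existed" (x = t_past), which matches the rough mental image I had.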

Your post actually made me think about how often the lifetime of something is heavily dependent on the lifetime of something else whose distribution is better known. Often you can just substitute one distribution for the other, but sometimes this is more difficult. For example, when someone is 60 and has been at the same company for 45 years, I don't expect him to stay another 45, because I roughly know when people tend to retire, which in turn depends on people's expected lifespans. The most extreme/ridiculous form of this is of course how every long-term forecast you make can be totally dominated by your timelines for AGI.