Quantifying anthropic effects on the Fermi paradox

Interesting! And nice to see ADT make an appearance ^_^

I want to point to where ADT+total utilitarianism diverges from SIA. Basically, SIA has no problem with extreme "Goldilocks" theories - theories that imply that only worlds almost exactly like the Earth have inhabitants. These theories are a priori unlikely (complexity penalty) but SIA is fine with them (if T1 is "only the Earth has life, but has it with certainty", while T2 is "every planet has life with probability 1/2", then SIA loves T1 twice as much as T2).

ADT+total ut, however, cares about agents that reason similarly to us, even if they don't evolve in exactly the same circumstances. So it weights T2 much more heavily than SIA does.
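The contrast can be made concrete with a toy calculation. This is only an illustrative sketch: the planet count, the probability 1/2, and the assumption that ADT+total utilitarianism counts one similar reasoner per inhabited planet are my own stand-in numbers, not anything from the argument itself.

```python
# Toy comparison of how SIA and ADT+total utilitarianism weight two theories.
# T1: only the Earth has life, with certainty.
# T2: every planet has life with probability 1/2.
# All concrete numbers here are illustrative assumptions.

N_PLANETS = 10**6  # hypothetical number of planets under T2
p_life = 0.5       # per-planet probability of life under T2

# SIA weights a theory by the expected number of observers in *exactly*
# our situation. Under T2 only an Earth-like-Earth counts, so the expected
# count is p_life; under T1 it is 1. Hence SIA favours T1 by a factor of 2.
sia_weight_T1 = 1.0
sia_weight_T2 = p_life

# ADT+total utilitarianism instead counts agents that *reason like us*,
# wherever they evolved. Under T2 that is roughly one agent per inhabited
# planet, so T2's weight is enormous rather than merely 1/2.
adt_weight_T1 = 1.0
adt_weight_T2 = p_life * N_PLANETS

print(sia_weight_T1 / sia_weight_T2)  # SIA: T1 beats T2 two-to-one
print(adt_weight_T2 / adt_weight_T1)  # ADT+total ut: T2 dominates
```

The point the numbers make is just the one in the comment: the extreme Goldilocks theory T1 wins slightly under SIA, but loses badly once similar reasoners on other worlds are counted.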

This may be relevant to further developments of the argument.