I'd like to make a pretty straightforward scope/tractability/neglectedness argument in favor of more anthropics research. In particular, I think that assessing the validity of the doomsday argument (roughly: you should expect to find yourself among the last 99.9% of humans ever born, so you should start from a ~99.9% prior that there will never be 100 trillion humans) is high impact. I would love to hear others' thoughts, including disagreements.
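
To make the arithmetic behind that prior concrete, here is a minimal sketch of the standard self-sampling calculation. The specific figures (roughly 10^11 humans born so far, 10^14 in the "long future" scenario) are illustrative assumptions, not claims from any particular source.

```python
# Toy doomsday-argument arithmetic under the Self-Sampling Assumption (SSA):
# treat your birth rank as if it were drawn uniformly from everyone who will ever live.

humans_born_so_far = 1e11   # assumption: roughly 100 billion births to date
long_future_total = 1e14    # assumption: the "100 trillion humans ever" scenario

# If the long future happens, the probability of finding yourself among the
# first ~10^11 births is only:
p_early_given_long_future = humans_born_so_far / long_future_total
print(f"P(this early a birth rank | 100 trillion humans ever) = {p_early_given_long_future:.1%}")
# -> 0.1%, which is where the "99.9% prior against a long future" framing comes from.
```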

 

Scope: My understanding is that there is no consensus on whether the doomsday argument is valid. Cause prioritization is a central question for the EA movement, and I'd argue that the validity of the doomsday argument is an important consideration within it. In particular, if we reach a consensus that the argument is invalid, then the case for longtermism looks really strong to me. (Holden Karnofsky has started laying out that case in a series on his blog; link to first post.) If, however, we conclude that the doomsday argument is valid, that bolsters the case for prioritizing more near-term causes, such as global poverty and animal suffering.

Tractability: Anthropics is incredibly confusing, but I do get the sense that we've started thinking more clearly about it over the last 20 years, and I expect this progress to continue. It seems to me that there is lowish-hanging fruit, in particular, in laying out the various assumptions (SSA, SIA, and variations of these) with mathematical rigor. In theory, this ought to reduce disagreements about questions like "Is the doomsday argument correct?" to disagreements about which assumptions to adopt, which would let us think about these questions more carefully; a toy sketch of what that might look like is below.
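
As a toy illustration of how formalizing the assumptions could localize the disagreement, here is a sketch comparing SSA and SIA on a simplified two-hypothesis doomsday setup. The 50/50 prior and the specific population sizes are assumptions made up for the example, not figures from the literature.

```python
# Two hypotheses about how many humans will ever exist, an (assumed) 50/50 prior,
# and an observer with birth rank ~10^11.
prior = {"doom_soon": 0.5, "long_future": 0.5}
total_population = {"doom_soon": 2e11, "long_future": 1e14}
my_birth_rank = 1e11

def posterior(weight_by_population: bool) -> dict:
    """SSA: each hypothesis predicts my birth rank with probability 1/N.
    SIA: additionally weight each hypothesis by N, which cancels the 1/N term."""
    likelihood = {
        h: (1 / total_population[h]) if my_birth_rank <= total_population[h] else 0.0
        for h in prior
    }
    weights = {
        h: prior[h] * likelihood[h] * (total_population[h] if weight_by_population else 1.0)
        for h in prior
    }
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

print("SSA:", posterior(weight_by_population=False))  # heavily favors doom_soon
print("SIA:", posterior(weight_by_population=True))   # the weighting restores the 50/50 prior
```

Under these made-up numbers, SSA shifts almost all posterior weight to the small-population hypothesis, while SIA exactly undoes that shift, so the disagreement about doom reduces to a disagreement about which sampling assumption to accept.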

Neglectedness: It seems to me that not many people are actively thinking about anthropics. I could be wrong, but, for example, most of the well-written anthropics posts on LessWrong seem to be by one person (Stuart Armstrong). My general sense is that we as a community are somewhat scared of touching anthropics because it's really confusing. I suspect that even a handful of additional people thinking seriously about anthropics would increase the number doing so by a factor of 2 or so.

Comments

I tend to agree. I think the main argument against is that some people at and around MIRI argue they've already (dis)solved it. I'd be interested to know to what extent people like Wei Dai, Stuart Armstrong, and Paul Christiano agree. If you personally want to collaborate on anthropics research, there are at least a couple of people at FHI who may be interested. Feel free to send a DM!

See this comment by Vladimir Slepnev and my response to it, which explain why I don't think UDT offers a full solution to anthropic reasoning.

Is the claimed dissolution by MIRI folks published somewhere?

I think they believe in Wei Dai's UDT, or some variant of it, which is very close to Stuart's anthropic decision theory, but you'd have to ask them which, if any, published or unpublished version they find most convincing.

I haven't done significant research into the doomsday argument, but I do remember thinking it seemed intuitively plausible when I first heard of it. Then I listened to this 80,000 Hours podcast, and the discussion of the doomsday argument, if I remember correctly, convinced me it's a non-issue. But you may want to relisten to make sure I'm remembering correctly. Correction: I was not remembering correctly. They came away with the conclusion that more funding and research is needed in this space.

There may be good work to be done on formalizing the puzzle and proving beyond a doubt that the logic doesn't hold.

I had the opposite takeaway from the podcast. Ajeya and Rob definitely don't come to a confident conclusion. Near the end of the segment, Ajeya says, referring directly to the simulation argument but also, I think, to anthropics more generally:

I would definitely be interested in funding people who want to think about this. I think it is really deeply neglected. It might be the most neglected global prioritisation question relative to its importance. There’s at least two people thinking about AI timelines, but zero people [thinking about simulation/anthropics], basically. Except for Paul in his spare time, I guess.

Ah, thanks. It was a while ago, so I guess I was misremembering.
