All of Alex Williams's Comments + Replies

Simulation argument?

Thank you. I am just wondering, though: when you say "each civ," what do you mean? What are these civilizations? Why assume they exist? What motivates the idea that there are other civilizations that run simulations sufficiently similar to our own world (as strange and contingent as its laws and constants are)?

The idea is that it seems like we are in a position to make ancestor simulations, which could contain organised life, i.e. "civilisations" (civs for short). Moreover, both simulated reality and base reality might be similar in that respect. Sorry to say this, but do you actually not follow, or are you doing the analytic philosopher move of saying "what do you mean" because you will only accept a rigorous/watertight argument (or just don't like the argument)?
Simulation argument?

Ok, thank you very much. But why then do so many people take the argument seriously? Is it surprising that the peer-review process didn't pick up this problem?

I think most people would probably regard the objection as a nitpick (e.g. "OK, maybe the Indifference Principle isn't actually sufficient to support a tight formal argument, and you need to add in some other assumption, but the informal version of the argument is just pretty clearly right"), feel the objection has been successfully answered (e.g. find the response in the Simulation Argument FAQ more compelling than I do), or simply haven't noticed the potential issue.

I think it's still totally reasonable for the paper to have passed peer review. ...

Simulation argument?

"If we could run a vast number of simulations someday, that would be strong statistical evidence in favor of the third alternative. And we would know nothing of them, just as people living in our simulations wouldn't know anything about us."


If we actually do this and run those simulations then we would know that we aren't in any of them. What is the connection between the Indifference Principle and this strong statistical evidence? Thank you, I am appreciative.

Simulation argument?

Ok, thank you very much. But why then do so many people take the argument seriously? 

Assume that base reality is similar to our own world, and that each civ has many descendant "simulated" civs. Each civ knows it is not one of its own sims, but the same is true of all of them, so it is still plausible that we should be indifferent across all of them, most of which are simulated. There is plenty of room to object at basically every stage of the argument; my point is just that you might still want to be indifferent between civs that all know they aren't their own sim.
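The counting step here can be made concrete with a toy calculation. The numbers below (one base civ, each civ running a given number of ancestor simulations, nested to some depth) are purely illustrative assumptions, not claims about actual simulation capacity:

```python
# Toy count for the indifference argument: one base-reality civ, where
# every civ runs `n_sims` ancestor simulations, nested `depth` levels deep.
# All numbers are made up for illustration.
def fraction_simulated(n_sims: int, depth: int) -> float:
    total = 0
    level_count = 1  # one civ at the base level
    for _ in range(depth + 1):
        total += level_count
        level_count *= n_sims  # each civ at this level spawns n_sims civs
    simulated = total - 1  # everything except base reality is simulated
    return simulated / total

# Even one level of nesting makes almost all civs simulated:
print(fraction_simulated(n_sims=1000, depth=1))  # 1000/1001, about 0.999
```

If you are indifferent across all the civs in this count, you should expect to be one of the simulated ones, even though every civ correctly rules out being one of its *own* sims.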
-2 · Alex Williams · 3mo
Like, I am surprised the article made it through the peer-review process without someone noting that problem.
The future is good "in expectation"

Okay, and thank you very much. But how do we know that if the universe timeline were run over and over again that it would be positive in value? Why not think that the future's value "in expectation" is neutral or very negative? Everyone seems to assume that the future will be good! Why?

3 · Charles He · 4mo
This is really important and valid, but there's a lot going on in this question! For clarity, note that your first question seemed to be about statistics or math. This new question is valid too.

The short answer is that there is no way to be certain whether the universe is good or bad in expectation (and this also depends on what you value; for example, "negative utilitarians" are worried about suffering more than other people are).

Yes, many people have this belief, and there are principled answers why. For example, if you think humans are basically good and can get more organized, good human activity and values could spread and produce better things in the future, as opposed to there being a lot of rocks in the universe or something.

Maybe what you are getting at are some topics within "longtermism", which focuses on the long-term future; longtermists often talk positively about its value. Yes, this is valid to believe. Note that longtermism doesn't actually need a positive future, just a future we can impact meaningfully. For example, some people focus on "s-risk", because even a tiny chance of some system of suffering spreading across the universe seems incredibly important. Those people, or people who have "very short timelines", may actually put zero or even negative value on the future, but for many of them there still seem to be ways to impact it.
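The phrase "good in expectation" just means a probability-weighted average over possible outcomes. A minimal sketch, with entirely made-up probabilities and values (nothing here is an empirical claim about the actual future), shows how the expectation can come out positive even when bad outcomes have substantial probability:

```python
# Illustrative expected-value calculation. The probabilities and values
# below are invented placeholders, not estimates anyone has defended.
outcomes = {
    "flourishing future": (0.50, +100.0),
    "mediocre future":    (0.30,   +1.0),
    "bad future":         (0.20,  -50.0),
}

# Expectation = sum of probability * value over all outcomes.
expected_value = sum(p * v for p, v in outcomes.values())
print(expected_value)  # 0.5*100 + 0.3*1 - 0.2*50, i.e. about 40.3
```

Whether the future is good in expectation, then, is exactly a dispute over what numbers belong in such a table, which is why pessimists and negative utilitarians can reach the opposite conclusion from the same framework.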