
wuschel

156 karma · Joined Oct 2019

Bio

Hi, I am Julian. I am studying Physics and Philosophy in Göttingen, Germany, and co-running the EA group here.

Current Goal: Finding out where my personal fit is best.

Achievements so far: I once Rick-rolled 3blue1brown; I don't think I'll ever be able to top that success.

Comments (24)

One Trillion Dollars by Andreas Eschbach

A random guy ends up with a one-trillion-dollar fortune and tries to use it to make the world a better place.

Themes include:

- consideration of long-term vs. short-term effects
- corruption through money and power
- doomerism
- galaxy-braining yourself into letting go of deontological norms
- a caricature of an EA as an antagonist

Strong agree.
I think having EA as a movement encompassing all those different cause areas also makes it possible to have EA groups in smaller places that could not sustain separate AI safety, global health, and animal rights groups.

I agree. You could substitute anything for "happy people".

But I don't think it proves too much. I don't want as much money as possible, and I don't want as much ice cream as possible. There seems to be, in general, some amount that is enough.

With happy people/QALYs, it is a bit trickier. I sure am happy for anyone who leads a happy life, but I don't, on a gut level, think that a world with 10^50 happy people is worse than a world with 10^51 happy people. 

Total Utilitarianism made me override my intuitions in the past and think that 10^51 people would be way better than 10^50, but this thought experiment made me less confident of that.

Though I do agree with the other commenters here that just because total Utilitarianism breaks down in extreme cases, that does not mean we have to doubt more common-sense beliefs like "future people matter as much as current people".

My gut feeling is that this is excessive. It seems like a sane reaction, though, if you agree with Metaculus on the 3% chance of Putin attacking the Baltics.

Do you agree that there is a 3% chance of a Russia-NATO conflict? Is Metaculus well enough calibrated that they can tell a 3% chance from a 0.3% chance?
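As a rough illustration of what it would take to tell those two rates apart from a track record (the question counts below are made up, not actual Metaculus data):

```python
# Back-of-the-envelope: how many resolved ~3% forecasts would it take to
# distinguish a true 3% base rate from a true 0.3% base rate?
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n = 300        # hypothetical number of resolved questions forecast at ~3%
observed = 9   # roughly the expected number of "yes" resolutions at a true 3%

print(f"P(>= {observed} hits | true rate 3%)   = {binom_tail(n, observed, 0.03):.2f}")
print(f"P(>= {observed} hits | true rate 0.3%) = {binom_tail(n, observed, 0.003):.1e}")
# If the 0.3% hypothesis makes the observed count astronomically unlikely,
# a track record can in principle separate the two rates -- but only with a
# fairly large pool of resolved questions in that probability range.
```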

Answer by wuschel · Jul 12, 2021

Relatable situation. For a short AI risk introduction for moms, I think I would suggest Robert Miles' YouTube channel.

Very interesting point; I had not thought of this.

I do think, however, that SIA, Utilitarianism, SSA, and Average Utilitarianism all kind of break down once we have an infinite number of people. I think people like Bostrom have thought about infinite ethics, but I have not read anything on that topic.

I think you are correct that there are RC-like problems that AU faces (like the ones you describe), but the original RC (for any population leading happy lives, there is a bigger population leading lives barely worth living, whose existence would be better) can be refuted.
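To spell out the contrast with a toy calculation (my notation, not from the original comment): let population A be $n$ people at welfare level $w > 0$ and population Z be $N$ people at a tiny positive welfare level $\varepsilon < w$. Then

$$\text{Total: } N\varepsilon > nw \text{ whenever } N > \tfrac{nw}{\varepsilon}, \qquad \text{Average: } \varepsilon < w \text{ for every } N,$$

so total utilitarianism yields the original RC for large enough $N$, while average utilitarianism never prefers Z to A in this comparison.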

1. Elaborating on why I think Tarsney implicitly assumes SSA:

You are right that Tarsney does not take any anthropic evidence into account. Therefore it might be more accurate to say that he forgot about anthropics, or does not think it is important. However, it just so happens that assuming the Self-Sampling Assumption would not change his credence in solipsism at all. If you are a random person among all actual persons, you cannot take your existence as evidence of how many people exist. So by not taking anthropic reasoning into account, he gets the same result as if he had assumed the Self-Sampling Assumption.
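To make that explicit with a one-line Bayes calculation (my notation): let $H_N$ be the hypothesis that $N$ people actually exist ($N \geq 1$) and $E$ the evidence "I exist". Under SSA, $P(E \mid H_N) = 1$ for every $N \geq 1$, so

$$P(H_N \mid E) = \frac{P(E \mid H_N)\,P(H_N)}{\sum_M P(E \mid H_M)\,P(H_M)} = P(H_N),$$

i.e. the posterior equals the prior: existence alone tells you nothing about how many people there are, which is why ignoring anthropics and assuming SSA give the same credence in solipsism.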

 

2. Doesn't the Self-Indication Assumption say that the universe is almost surely infinite?

Yes, that is the great weakness of the SIA. You are also completely correct that we need some kind of more sophisticated mathematics if we want to take the possibility of infinitely many people into account. But even if we just consider the possibility of very many people existing, the SIA yields weird results. See for example Nick Bostrom's thought experiment of the presumptuous philosopher (text copy-pasted from here):

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher runs the God's Coin Toss thought experiment and explains Model 3)!"
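The arithmetic behind the philosopher's claim, under SIA and in my notation: with $N_1 = 10^{24}$ observers under T1, $N_2 = 10^{36}$ observers under T2, and equal priors, SIA weights each theory by its number of observers, so

$$\frac{P(T_2 \mid \text{I exist})}{P(T_1 \mid \text{I exist})} = \frac{N_2\,P(T_2)}{N_1\,P(T_1)} = \frac{10^{36}}{10^{24}} = 10^{12},$$

i.e. T2 comes out about a trillion times more likely before any experiment is run.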

I think the way you put it makes sense, and if you put the numbers in, you get to the right conclusion. The way I think about this is slightly different, but (I think) equivalent:

Let $P$ be the set of all possible persons, and $p_i$ the probability that possible person $i$ exists. The probability that you are person $i$ is $\frac{p_i}{\sum_{j \in P} p_j}$. Let's say some but not all possible people have red hair. The subset of possible people with red hair is $R \subseteq P$. Then the probability that you have red hair is:

$$\frac{\sum_{i \in R} p_i}{\sum_{j \in P} p_j}$$

In my calculations in the post, the set of all possible people is the one solipsistic guy plus the $N$ people in the non-solipsistic universe (with their probabilities of existence being $q$ and $1-q$ respectively). So the probability that you are in a world where solipsism is true is $\frac{q}{q + (1-q)N}$.
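A quick numerical sketch of that formula (the values of $q$ and $N$ below are illustrative, not taken from the post):

```python
# SIA-style calculation from above: one possible solipsist (the hypothesis has
# prior q) versus N possible people in the non-solipsistic world (each of whom
# exists with probability 1 - q).

def p_solipsism(q: float, n_people: float) -> float:
    """P(you are the solipsist) = q / (q + (1 - q) * N)."""
    return q / (q + (1 - q) * n_people)

for q in (0.5, 0.1, 0.01):
    for n in (1e9, 1e12):
        print(f"prior q = {q:<4}  N = {n:.0e}  ->  P(solipsism) = {p_solipsism(q, n):.2e}")
# Even a sizeable prior on solipsism gets driven toward zero once N is large:
# SIA strongly favours hypotheses on which many observers exist.
```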
