All of wuschel's Comments + Replies

One trillion dollars by Andreas Eschbach

A random guy ends up with a one-trillion-dollar fortune and tries to use it to make the world a better place.

Themes include:

-consideration of long-term vs. short-term effects

-corruption through money and power

-doomerism

-galaxy-braining yourself into letting go of deontological norms

-a caricature of an EA as an antagonist

Strong agree. 
I think having EA as a movement encompassing all those different cause areas also makes it possible to have EA groups in smaller places that could not sustain separate AI safety, global health, and animal rights groups. 

2
carolinaollive
1y
Do I hear $120 + shipping? *prepares gavel*

I agree. You could substitute "happy people" with anything. 

But I don't think it proves too much. I don't want to have as much money as possible, nor do I want to have as much ice cream as possible. There seems to be, in general, some amount that is enough.

With happy people/QALYs, it is a bit trickier. I sure am happy for anyone who leads a happy life, but I don't, on a gut level, think that a world with 10^50 happy people is worse than a world with 10^51 happy people. 

Total Utilitarianism made me override my intuitions in the past and think that 10... (read more)

2
Yonatan Cale
2y
What if the devil offered you $1, and the same "infinite" offer to double it [by an amount that would mean, to you personally, twice the utility; this might mean doubling the dollar amount 10x] with a 90% probability [and a 10% probability that you get nothing]. Would you stop doubling at some point? If so, would you say this proves that at some point you don't want any more utility?
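(For intuition, here is a minimal sketch of the arithmetic behind this kind of offer, with toy assumptions of my own: each accepted round multiplies the current payoff by 2 with probability 0.9 and zeroes it otherwise.)

```python
# Toy model of the repeated-doubling offer (assumed parameters, not from the
# original comment): each accepted round doubles the payoff with probability
# 0.9 and leaves you with nothing otherwise.
def doubling_offer(rounds: int, start: float = 1.0,
                   p_keep: float = 0.9, factor: float = 2.0):
    expected_value = start * (p_keep * factor) ** rounds  # grows like 1.8^n
    p_anything_left = p_keep ** rounds                    # shrinks like 0.9^n
    return expected_value, p_anything_left

for n in (1, 10, 50):
    ev, p = doubling_offer(n)
    print(f"{n:>2} rounds: expected value ≈ {ev:.3g}, "
          f"P(walk away with anything) ≈ {p:.3g}")
```

The expected value keeps rising (each round multiplies it by 1.8), while the chance of walking away with anything shrinks toward zero; that tension is what the question about stopping at some point is probing.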

My gut feeling is that this is excessive. It seems like a sane reaction, though, if you agree with Metaculus on the 3% chance of Putin attacking the Baltics.

Do you agree that there is a 3% chance of a Russia-NATO conflict? Is Metaculus well-calibrated enough to tell a 3% chance from a 0.3% chance?

2
Greg_Colbourn
2y
Metaculus is (as of posting this comment) at 5% for Nuclear Detonation Fatality By 2024.
4
axioman
2y
Miscalibration might cut both ways... On the one hand, it seems quite plausible for forecasts like this to usually be underconfident about the likelihood of the null event, but on the other hand, recent events should probably have substantially increased forecasters' entropy for questions about geopolitical events in the next few days and weeks. 
Answer by wuschel · Jul 12, 2021 · 7

Relatable situation. For a short AI risk introduction for moms, I think I would suggest Robert Miles' YouTube channel.

3
Sanjay
3y
Not sure how good the Robert Miles channel is for mums (mine might not be particularly interested in his channel!), but for communicating about AI risk Robert Miles is (generally) good, and I second this recommendation.

Very interesting point; I had not thought of this. 

I do think, however, that SIA, Utilitarianism, SSA, and Average Utilitarianism all kind of break down once we have an infinite number of people. I think people like Bostrom have thought about infinite ethics, but I have not read anything on that topic. 

2
Derek Shiller
3y
I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it also isn't something I've thought about too much either. I initially was thinking that average utilitarians can't make a similar move without undermining its spirit, but maybe they can. However, if they can, I suspect they can make the same move in the finite case ('just focus on the average among the population you can affect') and that will throw off your calculations. Maybe in that case, if you can only affect a small number of individuals, the threat from solipsism can't even get going. In any case, I would hope that SIA is at least able to accommodate an infinite number of possible people, or the possibility of an infinite number of people, without becoming useless. I take it that there are an infinite number of epistemically possible people, and so this isn't just an exercise.

I think you are correct that there are RC-like problems that AU faces (like the ones you describe), but the original RC (for any population leading happy lives, there is a bigger population leading lives barely worth living whose existence would be better) can be refuted. 

1. Elaborating on why I think Tarsney implicitly assumes SSA:

You are right that Tarsney does not take any anthropic evidence into account. Therefore it might be more accurate to say that he forgot about anthropics/does not think it is important. However, it just so happens that assuming the Self-Sampling Assumption would not change his credence in solipsism at all. If you are a random person from all actual persons, you cannot take your existence as evidence of how many people exist. So by not taking anthropic reasoning into account, he gets the same re... (read more)

2
MichaelStJules
3y
Your evidenceless prior on the number of individuals must be asymptotically 0 (except for a positive probability for infinity), as the number increases, or else the probabilities won't sum to one. Maybe this solves some of the issue? Of course, we have strong evidence that the number is in fact pretty big as Tarsney points out, based on estimates of how many conscious animals have existed so far. And your prior is underdetermined.
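(A concrete illustration of the normalization point, with a prior of my own choosing rather than anything from Tarsney: p(N = n) ∝ 1/n² decays to zero yet sums to one, and still puts non-negligible mass on very large populations.)

```python
import math

# Hypothetical prior over the number of individuals: p(n) proportional to 1/n^2.
# The normalizing constant is 6/pi^2, because sum over n >= 1 of 1/n^2 = pi^2/6.
prior = lambda n: (6 / math.pi ** 2) / n ** 2

partial = sum(prior(n) for n in range(1, 1_000_000))
tail_1000 = 1 - sum(prior(n) for n in range(1, 1_000))
print(f"sum of p(n) over the first million n: {partial:.6f}")  # close to 1
print(f"P(N >= 1000) ≈ {tail_1000:.4f}")                       # small but not zero
```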

I think the way you put it makes sense, and if you put the number in, you get to the right conclusion. The way I think about this is slightly different, but (I think) equivalent:

Let $P$ be the set of all possible persons, and $p_i$ the probability of person $i$ existing. The probability that you are person $i$ is $\frac{p_i}{\sum_{j \in P} p_j}$. Let's say some but not all possible people have red hair; the subset of possible people with red hair is $R \subseteq P$. Then the probability that you have red hair is:

$$\Pr(\text{red hair}) = \frac{\sum_{i \in R} p_i}{\sum_{j \in P} p_j}$$
In my calculations in ... (read more)
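(A toy numerical instance of the formula above, with made-up existence probabilities just to show the mechanics:)

```python
# Made-up existence probabilities for three possible people, two with red hair.
possible_people = {
    "alice": {"p_exists": 0.9, "red_hair": True},
    "bob":   {"p_exists": 0.5, "red_hair": False},
    "carol": {"p_exists": 0.1, "red_hair": True},
}

total = sum(p["p_exists"] for p in possible_people.values())
red = sum(p["p_exists"] for p in possible_people.values() if p["red_hair"])
print(f"P(you have red hair) = {red / total:.3f}")  # (0.9 + 0.1) / 1.5 ≈ 0.667
```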

This comment totally made my day!

Hi, I am happy your parable finally made it onto the forum. Also: really nice idea to also upload the audio of the main text. For me at least, this is awesome, as I would much rather listen to things than read them. Wild idea: maybe more people could also narrate their posts, and we could have a tag that highlights audio posts, so one could specifically look for them? 

4
Remmelt
3y
Ah, good to know that my fumbled attempts at narrating were helpful! :) I’m personally up for the audio tag. Let me see if I can create one for this post.

Thanks for that comment and your thoughts! I am unfortunately unfamiliar with the works of Hare, but it sounds interesting and I might have to read up on that. 

I totally agree with you that there are statements to which we assign truth values that depend on the frame of reference (like "Derek Parfit's cat is to my left", or the temporal ordering of spacelike-separated events). 

I would also not have a problem with a moral theory that assigns 2 utilons to an action in one frame of reference and 3 utilons in another. 

I do however believe th... (read more)

Yes, good point. I agree that sufficient specification can make time discounting compatible with moral realism.  

One would have to specify an inertial frame from which to measure time. (That would be analogous to specifying the language as English, for example.) 

Then we would not have a logical contradiction anymore, which weakens my claim, but we would still have something I find implausible: an inertial frame that is preferred by the correct moral theory, even though it is not preferred by the laws of physics. 

On a side note: I think this is beautifully written, and I would be happy to read future posts from you. These personal glimpses into other people's struggles with EA concepts and values are something that I think might be really valuable to the community, and not many people have the talent to provide them.


Well. I'm floored. People keep upvoting this and saying such wonderfully kind things in the comments . . . Every time I got the notification that there was a new comment under this post, I internally flinched and cringed. I'd just written at length about my internal subjective experience, and I had regretted writing it since before I clicked submit. It took a lot of evidence piling up to convince the socially cautious part of my brain it was wrong. 

I'm going to update hard towards writing pieces like this one/writing more frequently. It seems like other people ... (read more)

Answer by wuschel · Nov 27, 2020 · 12

I am grateful for all the people in the community who are always happy to help with minor things. Every time I ask someone for training advice at a conference, or for an explanation of a word on Dank EA Memes, or for career advice on the Forum, I get really nice and detailed answers. I feel really accepted through that, especially considering how rare these things are on the internet.

Interesting idea, although I fear we might not like what we find...

I really like this accessible format. However, I think it would be helpful if there were at least footnotes to the source of your information whenever something is an interesting claim (for example, "One in three children has dangerous levels of lead in their bloodstream").
I fear that without traceability of information within official EA contexts, a lot of half-true hearsay seeps through the cracks.
I don't expect any of the information in this post to be false, however. 

I completely agree with you. This whole reasoning seems to heavily depend on using causal decision theory instead of its (in my opinion) more sensible competitors.

I am not sure if no one is getting the joke, or if people are just downvoting because they don't want ironic, jokey content on the EA Forum.

6
JohannWolfgang
4y
Honestly, I can't blame them in either case. I suppose the joke is not funny if you don't know the original, and the EA community is open enough to new, unusual ideas that it might attract the sort of crazy people who actually think removing the bladder is a good idea. Also, I told everybody who proofread the post that it was intended as a parody; maybe otherwise that is entirely non-obvious. And obviously, compared to the normal content on the forum, it might be seen as a waste of time to read this.

I'm sorry, but I consider that a very personal question.

Interesting questions. Although I don't think I know the answer to any of them better than you do, here is another possible reason why the suffering in your situation might not be bad:

You could argue, through the lens of personal identity, that if you were to self-modify so as not to feel pain via sympathy anymore, the person you would turn into would no longer be you in the morally relevant sense.

This reasoning, however, would only apply if you have an ethics that cares about personal identity (for example, by caring about you or your loved ones survivi... (read more)

Imagine you play cards with your friends. You have the deck in your hand. You are pretty confident that you have shuffled the deck. Then you deal the deck and give yourself the first 10 cards. And what a surprise: you happen to find all the clubs in your hand!

What is more reasonable to assume? That you just happened to draw all the clubs, or that you were wrong about having shuffled the cards? Probably the latter.
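(To put rough numbers on that intuition, here is a sketch of the Bayesian update, under assumptions of my own: a 32-card deck with 8 clubs, so that "all the clubs" fit into a 10-card hand, and a 99% prior that the deck really was shuffled.)

```python
from math import comb

# Assumed setup (not from the original comment): 32-card deck, 8 clubs,
# 10-card hand, 99% prior confidence that the deck was properly shuffled.
p_shuffled = 0.99

# Likelihood of holding all 8 clubs given a properly shuffled deck:
# the other 2 cards come from the 24 non-clubs.
p_clubs_given_shuffled = comb(24, 2) / comb(32, 10)

# Likelihood if the deck was not really shuffled (say the clubs were still
# stacked together on top) -- assumed fairly high for this sketch.
p_clubs_given_not_shuffled = 0.5

posterior_shuffled = (p_clubs_given_shuffled * p_shuffled) / (
    p_clubs_given_shuffled * p_shuffled
    + p_clubs_given_not_shuffled * (1 - p_shuffled)
)
print(f"P(all clubs | shuffled) ≈ {p_clubs_given_shuffled:.1e}")
print(f"P(shuffled | all clubs in hand) ≈ {posterior_shuffled:.4f}")
```

Even starting from 99% confidence that you shuffled, ending up with every club in your hand pushes that confidence below a tenth of a percent; the analogous update is what the comparison below is about.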

Compare this to:

Imagine thinking about the HoH hypothesis. You are pretty confident that you are good at long-term forecasting, and you pred... (read more)