Bio

Participation
4

I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
3064

Topic contributions
41

Hi Michael.

I guess the cases where you can't add directly (or with fixed weights) involve genuine normative uncertainty or incommensurability. Or, maybe some cases of two envelopes problems where it's too difficult or unjustifiable to set a unique common scale and use the Bayesian solution.

Effects along different dimensions can be added under expectational total hedonistic utilitarianism, precise credences, and moral realism?

I did not write the post, or read the book. However, based on the podcasts with Eliezer Yudkowsky and Nate Soares I have listened to, I would also like them to focus more on empirical evidence.

Do you see another bet we could make about AI risk? I remain open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. I am also open to increasing the stakes of our bet.

I would go with world A. I think world B is worse than, for example, a world with 1 M times as many people as B, and welfare per person just 0.0001 % lower. Repeating a similar comparison sufficiently many times, I conclude that world B is worse than a world C with way, way more people than B, and welfare per person just above 0.

I also believe world C is worse than a world D with 10 billion more people than C, all of whom experience super high welfare except for 1 person with welfare just below 0 (for example, -10^-100 times the welfare of a random human). So I conclude world B is worse than a world D with lots and lots of people with barely positive lives, 10^10 - 1 people with super high welfare w, and 1 person with welfare just below 0.

I think 10^10 - 1 people with welfare w is worse than 10^11 people with welfare 0.0001 % lower than w. Repeating this, I determine that 10^10 - 1 people with welfare w is worse than lots and lots of people with barely positive welfare. So I conclude that world B is worse than a world E with lots and lots of people with barely positive lives, and 1 person with welfare just below 0.

Based on similar reasoning, world B is worse than a world F with lots and lots of people with welfare just above 0, and lots of people with welfare just below 0. For the reasons I have mentioned in the thread, I think sufficiently many people with welfare just below 0 is worse than a given number of people with very negative welfare. So I conclude world B is worse than world A with infinitely many people with welfare just above 0, and a large number of people with very negative welfare.
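The repeated comparison above is just total-welfare arithmetic. Here is a minimal sketch of one step of the chain, using the 1 M population factor and 0.0001 % welfare decrement from the comment; the starting population and welfare level are assumed for illustration:

```python
# One step of the chain: compare a world of n people at welfare w with a
# world of 1,000,000 times as many people at welfare 0.0001 % lower.
# Under total utilitarianism, the ranking is driven by total welfare n * w.

def total_welfare(n, w):
    return n * w

n, w = 10**10, 100.0  # hypothetical starting world (assumed numbers)
for step in range(10):
    n_next, w_next = n * 10**6, w * (1 - 10**-6)
    # Each step multiplies total welfare by 10**6 * (1 - 10**-6) > 1,
    # so the larger, slightly-lower-welfare world always ranks higher.
    assert total_welfare(n_next, w_next) > total_welfare(n, w)
    n, w = n_next, w_next
```

Iterating sufficiently many times drives welfare per person toward 0 while total welfare keeps growing, which is the move from world B toward world C.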

My conclusion that A is better than B may seem counterintuitive. However, I strongly endorse all the steps that lead me to conclude A is better than B. So I also endorse this conclusion. It is also counterintuitive that the mass of sufficiently many grains of sand could be larger than the mass of a mountain. However, I strongly endorse that grains of sand have mass, and therefore I am forced to endorse the conclusion that sufficiently many grains could have a greater mass than a mountain.

From my computational physics experience, I know that it is physically impossible to simulate the exact electrical properties of a system of a couple hundred atoms on a classical digital computer, due to a blowup in computational complexity.

Relatedly, I liked the post Costs of Embodiment.

People would not distinguish between 53 °C for 60.1 s (maximum pain level of 5), and 53 °C for 59.9 s (maximum pain level of 4).

I should have been clearer. I meant "people would barely distinguish". In any case, my point is that you seem to believe pain of level 5 is infinitely worse than pain of level 4, despite people barely or not distinguishing between experiences with pain of level 4 and 5 when they are sufficiently close to the temperature-duration curve that separates a maximum pain level of 4 from one of 5.

Would you prefer averting i) 53 °C for 60.1 s (maximum pain level of 5) for 1 person with probability 10^-100 over ii) 53 °C for 59.9 s (maximum pain level of 4) for the 8 billion people on Earth with certainty?

You did not answer this? If it helps, you could imagine that it was a real situation, and that by default ii) all people on Earth would have one hand under water at 53 °C with certainty for 59.9 s, but that you could prevent this, and instead have i) just one person have their hand under water at 53 °C with probability 10^-100 for 60.1 s. It seems obvious to me i) is way better. However, if you think level 5 pain is infinitely worse than level 4 pain, you would pick ii).
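The choice between i) and ii) can be sketched as an expected-disutility comparison. The probability and population come from the thought experiment; the finite severity ratio r is an assumption of the sketch, not a number from the thread:

```python
p = 1e-100       # probability of the level-5 pain in option i)
n = 8e9          # people subjected to the level-4 pain in option ii)

# If level-5 pain is treated as r times as bad as level-4 pain,
# option i) is better whenever r * p * 1 < n, i.e. for any finite
# r below n / p (~10^110 here).
r = 1e6          # hypothetical finite severity ratio (assumed)
assert r * p * 1 < n  # i) causes less expected harm than ii)

# A lexical view (level 5 infinitely worse than level 4) instead
# picks ii), no matter how small p is.
```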

Actually, in my model, level 5 is unnecessary as it is evolutionarily the same response as level 4, so I’ll probably remove it from my updated model.

The number of pain levels and the temperature are not important to the situation I described above. As long as you believe some pains are infinitely worse than others, it is possible to come up with a situation like the one above where you would pick ii) all people on Earth having one hand under water at temperature T with certainty for 59.9 s (maximum pain level of k) over i) just one person having their hand under water at temperature T with probability 10^-100 for 60.1 s (maximum pain level of k + 1).

I am only discussing temperature and duration, but my argument generalises to any number of dimensions affecting the maximum level of pain. If this depends on N variables, there will be an N-dimensional space with boundaries separating experiences with maximum pain levels of k and k + 1. So, for a boundary which contains experiences with duration 60 s, people prioritising pains of level k + 1 infinitely more than pains of level k would pick ii) all people on Earth being subject to a painful stimulus with certainty for 59.9 s (maximum pain level of k) over i) just one person being subject to the same painful stimulus with probability 10^-100 for 60.1 s (maximum pain level of k + 1).

If reincarnation were real, would you prefer infinite lifetimes with dust specks irritating you for 10 minutes, or just one lifetime of 10 min of extreme, unbearable hell? The former is infinitely many times longer than the latter.

I do not know whether literal dust specks would be sufficiently bad to make my welfare negative. However, I would prefer 10 min of extreme, unbearable hell over an infinite time with slightly negative welfare.

Thanks for the very interesting post, Andrés.

Hi Andrés. How would you quantitatively compare the intensity of subjective experiences across species? What would be a good proxy for the (expected) welfare range under electromagnetic (EM) field theories of consciousness?

That makes sense. I was imagining inputs which are broader than car pieces, but narrower than just people (labour) and money (capital), like people in specific roles, or certain production equipment.
