Update, 12/7/21: As an experiment, we're trying out a longer-running Open Thread that isn't refreshed each month. We've set this thread to display new comments first by default, rather than high-karma comments.
If you're new to the EA Forum, consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you have something to share that doesn't feel like a full post, add it here!
(You can also create a Shortform post.)
Open threads are also a place to share good news, big or small. See this post for ideas.
(X-posting from LW open thread)
I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)
It seems like an important topic, but I'm a bit confused by what he's saying here. Is the perspective he's discussing (and puts non-negligible probability on) one that holds that the worst possible suffering is a bajillion times worse than the best possible pleasure? If so, wouldn't that suggest every human's life is net-negative in expectation, even if your credence in that view is only ~0.1%? Or is this just discussing the energy efficiency of 'hedonium' and 'dolorium', which could potentially be dealt with by some sort of limitation on compute?
Also, I'm not really sure whether this set of views is more of a normative claim ("a broken bone or waterboarding is a million times as morally pressing as making a happy person") or a more empirical one ("most suffering we're familiar with, e.g. waterboarding, is extremely mild; humans can experience something unimaginably worse, and pleasure doesn't scale to the same degree"). Even a tiny chance of the second one being true is awful to contemplate.
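To make the "tiny chance" worry concrete, here is a toy expected-value calculation. All of the numbers (the ~0.1% credence, the 10^6 asymmetry factor, and the pleasure/suffering totals for an ordinary life) are made-up assumptions for illustration, not anything from the podcast:

```python
# Toy calculation: can a small credence in an extreme suffering-dominant view
# make an ordinary life net-negative in expectation? All numbers are
# illustrative assumptions.

p_extreme = 0.001        # ~0.1% credence that suffering vastly outweighs pleasure
asymmetry = 1_000_000    # under that view, a unit of suffering counts 10^6 times more

pleasure_units = 100     # stipulated lifetime pleasure
suffering_units = 1      # stipulated lifetime suffering

value_symmetric = pleasure_units - suffering_units              # = 99
value_extreme = pleasure_units - asymmetry * suffering_units    # = -999,900

expected_value = (1 - p_extreme) * value_symmetric + p_extreme * value_extreme
print(expected_value)    # ≈ -901: negative in expectation despite the tiny credence
```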
Specifically:
I'm not really sure what's meant by "the reality" here, nor what's meant by "biased". Is the assertion that humans' intuitive preferences are calibrated to the range of experiences possible in the ancestral environment, and that this range isn't likely to match the maximum possible pleasure-to-suffering ratio in the future? If so, how does that lead one to conclude the ratio is worse (rather than better)? I'm not really sure how these arguments connect in a way that would lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.
It could be very extreme in case (2) if for some reason you think that the worst suffering is a million times worse than the best happiness (maybe you are imagining severe torture), but I agree that this seems implausibly extreme. Re how to weigh the different possibilities, it depends on whether you: 1) scale it as +1 vs 1M, 2) scale it as +1 vs 1/1M, or 3) give both models equal vote in a moral parliament.
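For what it's worth, here is a minimal sketch of how those three options can diverge numerically, under my reading that option 1 fixes the pleasure unit across models while option 2 fixes the suffering unit. The factor of 1M, the 50/50 credences, and the toy act are all made-up assumptions:

```python
# Two models of a candidate act's value. Model A: the worst suffering is 1M
# times worse than the best happiness is good. Model B: pleasure and suffering
# are symmetric. All numbers are illustrative assumptions.

M = 1_000_000
credence_A = credence_B = 0.5

pleasure, suffering = 10.0, 1.0   # made-up units for one act

value_B = pleasure - suffering                  # model B: +9

# Option 1: fix the pleasure unit at +1 in both models, so model A rates the
# act at pleasure - M * suffering.
value_A = pleasure - M * suffering              # model A: -999,990
option1 = credence_A * value_A + credence_B * value_B            # ≈ -499,990.5

# Option 2: fix the suffering unit at -1 in both models, so model A's pleasure
# shrinks to +1/M per unit.
value_A_rescaled = pleasure / M - suffering     # model A (rescaled): ≈ -1
option2 = credence_A * value_A_rescaled + credence_B * value_B   # ≈ +4.0

# Option 3: moral parliament: each model gets one vote on whether the act is
# net-positive, instead of having its cardinal values summed.
votes_for = sum(v > 0 for v in (value_A, value_B))               # splits 1-1
option3 = "approved" if votes_for > 1 else "no majority"

print(option1, option2, option3)
# Option 1 is dominated by model A, option 2 is dominated by model B, and the
# parliament deadlocks: the normalization choice drives the conclusion.
```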