Not sure I agree. Brian Tomasik's post is less a general argument against the approach of EV maximization than a demonstration of its misapplication in a context where the expectation is computed across two distinct distributions of utility functions. As an aside, I also don't see the relation between the primary argument made there and the two-envelopes problem, because the latter can be resolved by identifying a very clear mathematical flaw in the claim that switching is better.
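(For concreteness, the flaw is that the naive argument treats "the other envelope holds 2x" and "the other envelope holds x/2" as equally likely *conditional on the observed amount x*, which no proper prior can support. A quick simulation makes this visible; the uniform prior over the smaller amount is my own illustrative choice, not anything from the original problem:)

```python
import random

random.seed(0)

def play(switch, trials=100_000):
    """Average payoff when we always keep, or always switch, our envelope."""
    total = 0.0
    for _ in range(trials):
        x = random.uniform(1, 100)   # illustrative prior over the smaller amount
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        chosen, other = envelopes
        total += other if switch else chosen
    return total / trials

keep, swap = play(switch=False), play(switch=True)
# Both averages come out essentially equal (~1.5x the mean smaller amount),
# contradicting the naive claim that switching yields 1.25x the chosen amount.
print(keep, swap)
```

Switching gains nothing once the expectation is taken over a single well-defined prior, which is exactly why the paradox dissolves.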
This is a very interesting study and analysis.
I was wondering what its implication would be for an area like animal rights/welfare where the baseline support is likely to be considerably lower than that of climate change.
If we assume that the polarizing effect of radical activism holds across other issues as well, then the fraction of people who become less supportive may be higher than the fraction persuaded to become more concerned (for the simple reason that, to start with, the odds of people supporting even the more mo...
I didn't get the intuition behind the initial formulation:
What exactly is that supposed to represent? And what was the basis for assigning numbers to the contingency matrix in the two example cases you've considered?
...it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)"
Thanks, that is a fairly accurate summary of one of the crucial points I am making, except I would also add that the difficulty of estimation increases with time. And this is a major concern here because the case of longtermis...
Great points again!
I have only cursorily examined the links you've shared (I've bookmarked them for later), but I hope the central thrust of what I am saying does not depend too strongly on close familiarity with their contents.
A few clarifications are in order. I am really not sure about AGI timelines and that's why I am reluctant to attach any probability to it. For instance, the only reason I believe that there is less than 50% chance that we will have AGI in the next 50 years is because we have not seen it yet and IMO it seems rather ...
This is a very interesting paper, and while it covers a lot of the ground I have described in the introduction, the actual cubic growth model used has a number of limitations. Perhaps the most significant is the assumption that the causal effect of an intervention diminishes over time and converges towards some inevitable state: more precisely, it assumes $P(S \mid A) \to P(S \mid B)$ as $t \to \infty$, where $S$ is some desirable future state and $A$ and $B$ are some distinct interventions at present.
Please correct me if I am wrong ab...
Several good points made by Linch, Aryeh and steve2512.
As for making my skepticism more precise in terms of probability, it's less about me having a clear sense of timeline predictions that are radically different from those who believe that AGI will explode upon us in the next few decades, and more about the fact that I find most justifications and arguments made in favor of a timeline of less than 50 years to be rather unconvincing.
For instance, having studied and used state-of-the-art deep learning models, I am simply not able to under...
You're completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or nuclear war) are improbable.
Indeed, as I have said, even if the probability of the future scenarios I am positing is of the order of 0.00001 (which makes them improbable), that can hardly be grounds to dismiss the argument in this context, simply because longtermism appeals p...
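(To make the arithmetic explicit with purely illustrative numbers of my own choosing, not estimates from this thread: a tiny probability multiplied by astronomical stakes still yields an enormous expected value, which is precisely the style of reasoning longtermism relies on.)

```python
# Illustrative only: both numbers below are assumptions, not estimates.
p_scenario = 1e-5        # probability of the improbable future scenario
stakes = 1e15            # hypothetical magnitude of value at stake
expected_impact = p_scenario * stakes
print(expected_impact)   # ~1e10: far from negligible despite the tiny probability
```

So a probability of 0.00001 cannot be waved away by the very framework that multiplies small probabilities by vast stakes.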
Thanks for the response. I believe I understand your objection but it would be helpful to distinguish the following two propositions:
a. A catastrophic risk in the next few years is likely to be horrible for humanity over the next 500 years.
b. A catastrophic risk in the next few years is likely to leave humanity (and other sentient agents) worse off in the next 5,000,000 years, all things considered.
I have no disagreement at all with the first but am deeply skeptical of the second. And that's where the divergence comes from.
The example ...
Not sure I follow this, but doesn't the very notion of stochastic dominance arise only when we have two distinct probability distributions? In this scenario the distribution of outcomes is held fixed, and the net expected utility is determined by weighing the outcomes according to other criteria (such as risk aversion or aversion to no-difference).
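(To spell out what I mean: a first-order stochastic dominance check compares the CDFs of two distributions, so with a single fixed outcome distribution that is merely reweighted by risk attitudes, there is nothing for dominance to compare. A minimal sketch, with illustrative distributions of my own:)

```python
def fosd(pmf_a, pmf_b, outcomes):
    """True if distribution A first-order stochastically dominates B:
    A's CDF lies at or below B's at every outcome (outcomes in increasing order)."""
    cdf_a = cdf_b = 0.0
    for x in outcomes:
        cdf_a += pmf_a.get(x, 0.0)
        cdf_b += pmf_b.get(x, 0.0)
        if cdf_a > cdf_b + 1e-12:   # A accumulates mass faster at low outcomes
            return False
    return True

outcomes = [0, 1, 2]
a = {0: 0.1, 1: 0.3, 2: 0.6}   # shifts mass toward higher outcomes
b = {0: 0.3, 1: 0.3, 2: 0.4}
print(fosd(a, b, outcomes))    # True
print(fosd(b, a, outcomes))    # False
```

The check is only well-posed because `a` and `b` are genuinely different distributions; it says nothing about a single distribution evaluated under different utility weightings.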