All of Venky1024's Comments + Replies

Not sure I follow this, but doesn't the very notion of stochastic dominance arise only when we have two distinct probability distributions? In this scenario the distribution of outcomes is held fixed, but the net expected utility is determined by weighting the outcomes based on other criteria (such as risk aversion or aversion to no-difference).

2
MichaelStJules
6mo
Even if we're difference-making risk averse, we still have, and should still compare, multiple distributions of outcomes in order to decide between options, so SD would be applicable.
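
For concreteness, the comparison is between the outcome distributions induced by the different options, not a fixed background distribution of outcomes. A sketch of the first-order notion, in generic notation: option A stochastically dominates option B when

\[
\Pr(U_A \ge x) \;\ge\; \Pr(U_B \ge x) \quad \text{for every outcome level } x,
\]

with strict inequality for at least one x, where U_A and U_B are the (uncertain) outcomes under the two options.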

Not sure I agree. Brian Tomasik's post is less a general argument against the approach of EV maximization than a demonstration of its misapplication in a context where the expectation is computed across two distinct distributions of utility functions. As an aside, I also don't see the relation between the primary argument being made there and the two-envelopes problem, because the latter can be resolved by identifying a very clear mathematical flaw in the claim that switching is better.
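
To spell out the flaw I am referring to (a sketch of the standard resolution, writing X for the amount in the chosen envelope and Y for the amount in the other): the naive argument computes

\[
\mathbb{E}[Y \mid X = x] \;=\; \tfrac{1}{2}(2x) + \tfrac{1}{2}\!\left(\tfrac{x}{2}\right) \;=\; \tfrac{5x}{4} \;>\; x \quad \text{for every } x,
\]

which would make switching always better. The error is that the two factors of 1/2 treat the other envelope as equally likely to hold 2x or x/2 conditional on every observed x, which no proper prior over the amounts can satisfy. Conditioning instead on the (unknown) pair of amounts {a, 2a},

\[
\mathbb{E}[Y - X] \;=\; \tfrac{1}{2}(2a - a) + \tfrac{1}{2}(a - 2a) \;=\; 0,
\]

so switching confers no advantage.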

This is a very interesting study and analysis.

I was wondering what its implications would be for an area like animal rights/welfare, where the baseline support is likely to be considerably lower than for climate change.

If we assume that the polarization effect of radical activism holds true across other issues as well, then the fraction of people who become less supportive may be higher than the fraction who have been persuaded to become more concerned (for the simple reason that, to start with, the odds of people supporting even the more mo... (read more)

I didn't get the intuition behind the initial formulation (the α-maxmin expression).

What exactly is that supposed to represent? And what was the basis for assigning numbers to the contingency matrix in the two example cases you've considered?

3
Benedikt Schmidt
3y
Thanks for your question! This is how the α-maxmin model is defined. You can consider the coefficient α as a sort of pessimism index; for details, see the source, Ghirardato et al. It is supposed to represent the extreme cases. The numbers in the examples are illustrative; the purpose is to have two different cases to study.
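
In symbols, the α-maxmin criterion from Ghirardato et al. is, roughly (with C the set of priors and u the utility function),

\[
V(f) \;=\; \alpha \min_{p \in C} \mathbb{E}_p\!\left[u(f)\right] \;+\; (1 - \alpha) \max_{p \in C} \mathbb{E}_p\!\left[u(f)\right], \qquad \alpha \in [0, 1],
\]

so that α = 1 recovers the purely pessimistic maxmin rule and α = 0 the purely optimistic maxmax rule; intermediate values mix the two extreme cases.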

...it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)"

Thanks, that is a fairly accurate summary of one of the crucial points I am making, except that I would also add that the difficulty of estimation increases with time. And this is a major concern here because the case of longtermis... (read more)

Great points again!
I have only cursorily examined the links you've shared (bookmarked them for later), but I hope the central thrust of what I am saying does not depend too strongly on close familiarity with their contents.

A few clarifications are in order. I am really not sure about AGI timelines, and that's why I am reluctant to attach any probability to them. For instance, the only reason I believe that there is less than a 50% chance that we will have AGI in the next 50 years is that we have not seen it yet, and IMO it seems rather ... (read more)

This is a very interesting paper, and while it covers a lot of ground that I have described in the introduction, the actual cubic growth model used has a number of limitations, perhaps the most significant of which is the assumption that the causal effect of an intervention diminishes over time and converges towards some inevitable state: more precisely, it assumes P_t(S | A) − P_t(S | B) → 0 as t → ∞, where S is some desirable future state and A and B are some distinct interventions at present.
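
A sketch of why this assumption does so much work (my notation, not the paper's): if value is additive over time and depends only on whether the world is in the desirable state S at time t, with v_t the value of being in S at t, then the difference in expected value between the two interventions is

\[
\Delta \mathbb{E}V(A, B) \;=\; \sum_{t} v_t \,\bigl[\, P_t(S \mid A) - P_t(S \mid B) \,\bigr],
\]

so the faster the bracketed difference is assumed to decay, the less the far future contributes to the comparison between interventions.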

Please correct me if I am wrong ab... (read more)

Several good points made by Linch, Aryeh and steve2512. 

As for making my skepticism more precise in terms of probability, it's less about me having a clear sense of timeline predictions that are radically different from those of people who believe that AGI will explode upon us in the next few decades, and more about the fact that I find most justifications and arguments made in favor of a timeline of less than 50 years rather unconvincing.

For instance, having studied and used state-of-the-art deep learning models, I am simply not able to under... (read more)

4
Steven Byrnes
3y
If we don't have convincing evidence in favor of a timeline <50 years, and we also don't have convincing evidence in favor of a timeline ≥50 years, then we just have to say that this is a question on which we don't have convincing evidence of anything in particular. But we still have to take whatever evidence we have and make the best decisions we can. ¯\_(ツ)_/¯  (You don't say this explicitly but your wording kinda implies that ≥50 years is the default, and we need convincing evidence to change our mind away from that default. If so, I would ask why we should take ≥50 years to be the default. Or sorry if I'm putting words in your mouth.)

Lots of ingredients go into AGI, including (1) algorithms, (2) lots of inexpensive chips that can do lots of calculations per second, (3) technology for fast communication between these chips, (4) infrastructure for managing large jobs on compute clusters, (5) frameworks and expertise in parallelizing algorithms, (6) general willingness to spend millions of dollars and roll custom ASICs to run a learning algorithm, (7) coding and debugging tools and optimizing compilers, etc. Even if you believe that we've made no progress whatsoever on algorithms since the 1950s, we've made massive progress in the other categories. I think that alone puts us "significantly closer to AGI today than we were in the 1950s": once we get the algorithms, at least everything else will be ready to go, and that wasn't true in the 1950s, right?

But I would also strongly disagree with the idea that we've made no progress whatsoever on algorithms since the 1950s. Even if you think that GPT-3 and AlphaGo have absolutely nothing whatsoever to do with AGI algorithms (which strikes me as an implausibly strong statement, although I would endorse much weaker versions of that statement), that's far from the only strand of research in AI, let alone neuroscience. For example, there's a (IMO plausible) argument that PGMs and causal diagrams will be more important to

You're completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or nuclear war) are improbable.

Indeed, as I have said, even if the probability of the future scenarios I am positing is of the order of 0.00001 (which makes them improbable), that can hardly be grounds to dismiss the argument in this context, simply because longtermism appeals p... (read more)

2
Harrison Durland
3y
I see what you mean, and again I have some sympathy for the argument that it's very difficult to be confident about a given probability distribution in terms of both positive and negative consequences. However, to summarize my concerns here, I still think that even if there is a large amount of uncertainty, there is typically still reason to think that some things will have a positive expected value: preventing a given event (e.g., a global nuclear war) might have a ~0.001% chance of making existence worse in the long term (possibility A), but it seems fair to estimate that preventing the same event also has a ~0.1% chance of producing an equal amount of long-term net benefit (B). Both estimates can be highly uncertain, but there doesn't seem to be a good reason to expect that (A) is more likely than (B).

My concern thus far has been that it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)" (If that isn't your argument, feel free to clarify!).

In contrast, my point is "Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try."
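
To make the arithmetic behind that comparison explicit (a sketch, assuming the potential long-term harm and benefit are of roughly equal magnitude V, with the ~0.1% chance of benefit as B and the ~0.001% chance of harm as A):

\[
\Delta \mathbb{E}V \;\approx\; 10^{-3} \cdot V \;-\; 10^{-5} \cdot V \;=\; 9.9 \times 10^{-4}\, V \;>\; 0,
\]

so even with wide uncertainty on both probabilities, prevention comes out positive in expectation unless (A) is judged roughly a hundred times more likely than the estimate above, relative to (B).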

Thanks for the response. I believe I understand your objection, but it would be helpful to distinguish the following two propositions:

a. A catastrophic risk in the next few years is likely to be horrible for humanity over the next 500 years.

b. A catastrophic risk in the next few years is likely to leave humanity (and other sentient agents) worse off in the next 5,000,000 years, all things considered.

I have no disagreement at all with the first but am deeply skeptical of the second. And that's where the divergence comes from.

The example ... (read more)

3
Harrison Durland
3y
This seems to be an issue of only considering one side of the possibility distribution. I think it's very arguable that a post-nuclear-holocaust society is just as likely, if not more likely, to be more racist/sexist, more violent or suspicious of others, more cruel to animals (if only because our progress in, e.g., lab-grown meat will be undone), etc. in the long term. This is especially the case if history just keeps going through cycles of civilizational collapse and rebuilding—in which case we might have to suffer for hundreds of thousands of years (and subject animals to that many more years of cruelty) until we finally develop a civilization that is capable of maximizing human/sentient flourishing (assuming we don't go extinct!).

You cite the example of post-WW2 peace, but I don't think it's that simple:

1. There were many wars afterwards (e.g., the Korean War, Vietnam); they just weren't all as global in scale. Thus, WW2 may have been more of a peak outlier at a unique moment in history.
2. It's entirely possible WW2 could have led to another, even worse war—we just got lucky. (Consider how people thought WW1 would be the war to end all wars because of its brutality, only for WW2 to follow a few decades later.)
3. Inventions such as nuclear weapons, the strengthening of the international system in terms of trade and diplomacy, the disenchantment with fascism/totalitarianism (with the exception of communism), and a variety of other factors seem to have helped prevent a WW3; the brutality of WW2 was not the only factor.

Ultimately, the argument that seemingly horrible things like nuclear holocausts (or The Holocaust) or world wars are more likely to produce good outcomes in the long term still seems generally improbable to me. (I just wish someone who is more familiar with longtermism would contribute.)