Venky1024

26 karma · Joined Sep 2021

Comments (9)

This is a very interesting study and analysis!

I was wondering what its implications would be for an area like animal rights/welfare, where baseline support is likely to be considerably lower than for climate change.

If we assume that the polarizing effect of radical activism holds across other issues as well, then the fraction of people who become less supportive may exceed the fraction persuaded to become more concerned (for the simple reason that, to start with, the odds of people supporting even the more moderate animal rights positions would be rather low).

I reckon, though, that such simple extrapolation is fraught, and other factors will come into play when it comes to animal advocacy.

I didn't get the intuition behind the initial formulation: what exactly is it supposed to represent? And what was the basis for assigning numbers to the contingency matrix in the two example cases you've considered?

...it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)"

Thanks, that is a fairly accurate summary of one of the crucial points I am making, except that I would also add that the difficulty of estimation increases with time. And this is a major concern here, because the case for longtermism rests precisely on there being an ever greater number of humans (and other independent sentient agents) as the time horizon expands.

 

Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.

I fully agree that we should try, but the case for longtermism remains rather weak until we have estimates and bounds that can be reasonably justified.

Great points again! I have only cursorily examined the links you've shared (I have bookmarked them for later), but I hope the central thrust of what I am saying does not depend too strongly on close familiarity with their contents.

A few clarifications are in order. I am really not sure about AGI timelines, and that is why I am reluctant to attach any probability to them. For instance, the only reason I believe there is less than a 50% chance that we will have AGI in the next 50 years is that we have not seen it yet, and it seems rather unlikely to me that current directions will lead us there. But that is a very weak justification. What I do know is that some radical qualitative change is needed for artificial agents to go from excelling at narrow tasks to developing general intelligence.

That said, it may seem like nit-picking, but I do want to draw a distinction between "no significant progress" and "no progress at all" towards AGI. I am claiming only the former: I have no doubt that we have made incredible progress with algorithms in general; I am just less convinced about how much those algorithms bring us closer to AGI. (In hindsight, it may turn out that current deep learning approaches such as GANs contain path-breaking proto-AGI ideas/principles, but I am unable to see it that way.)

 

If we consider a scale of 0-100, where 100 represents attaining AGI and 0 is some starting point in the 1950s, I have no clear idea whether the progress we have made thus far is closer to 5, 0.5, or even 0.05. I have no strong arguments to justify any particular figure because I am far too uncertain about how distant the final stage is.


There can also be no question about the other categories of progress you have highlighted, such as compute power, infrastructure, and large datasets; indeed, I see these as central to the remarkable performance we have come to witness with deep learning models.

My perspective is that, while acknowledging plenty of progress in understanding several processes in the brain (signal propagation, mapping of specific sensory stimuli to neuronal activity, theories of how brain wiring at birth may encode various learning algorithms), these constitute piecemeal knowledge and still seem quite a few strides removed from the bigger question: how do we attain high-level cognition, develop abstract thinking, and become able to reason and solve complex mathematical problems?

 

Sorry if I'm misunderstanding.

"isn't there an infinite degree of freedom associated with a continuous function?"

I'm a bit confused by this; are you saying that the only possible AGI algorithm is "the exact algorithm that the human brain runs"? The brain is wired up by a finite number of genes, right?

I agree that we don't necessarily have to reproduce the exact wiring or the functional relation in order to create a general intelligence (which is why I mentioned the equivalence classes).

A finite number of genes implies a finite number of steps/information/computation (that much is not in dispute, of course), but the number of potential wiring configurations in the brain, and of functional forms relating inputs to outputs, is exponentially large. (In principle it is infinite if we want to reproduce the exact function, but we both agree that may not be necessary.) Pure exploratory search may not be feasible, and one may make the case that with appropriate priors, and assuming some modular structure of the brain, the search space shrinks considerably; but how much of a quantitative grip do we have on this, and how much rests on speculation?
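To give a rough sense of what "exponentially large" means here, a minimal sketch of my own (treating each potential connection between n units as simply present or absent, and ignoring weights, dynamics, and plasticity entirely, so this is an intuition pump rather than a biological estimate):

```python
import math

# Purely illustrative: count the directed wiring diagrams over n units when
# each ordered pair is either connected or not, i.e. 2 ** (n * (n - 1)).
# Real neural circuits have weighted, dynamic connections, so this is only a
# crude lower bound on the size of the search space, not a biological claim.
for n in (10, 100, 1000):
    log10_configs = n * (n - 1) * math.log10(2)
    print(f"n = {n:>4}: ~10^{log10_configs:,.0f} possible wiring diagrams")
```

Even under these drastically simplified assumptions the space is far beyond brute-force search, which is why the priors and modularity assumptions end up doing almost all of the work, and why I keep asking how much of a quantitative grip we actually have on them.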

This is a very interesting paper, and while it covers a lot of the ground I have described in the introduction, the actual cubic growth model used has a number of limitations, perhaps the most significant of which is that it assumes the causal effect of an intervention diminishes over time and converges towards some inevitable state: more precisely, it assumes $P_t(S \mid A) - P_t(S \mid B) \to 0$ as $t \to \infty$, where $S$ is some desirable future state and $A$ and $B$ are some distinct interventions at present.

Please correct me if I am wrong about this.

However, the introduction considers not just interventions fading out in their ability to influence future events, but also their sheer unpredictability. In fact, much as I did, it cites the idea from chaos theory:

...we know on theoretical grounds that complex systems can be extremely sensitive to initial conditions, such that very small changes produce very large differences in later conditions (Lorenz, 1963; Schuster and Just, 2006). If human societies exhibit this sort of “chaotic” behavior with respect to features that determine the long-term effects of our actions (to put it very roughly), then attempts to predictably influence the far future may be insuperably stymied by our inability to measure the present state of the world with arbitrary precision.
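As a minimal sketch of the sensitivity this passage describes (using the logistic map, a standard textbook example of chaos, rather than the Lorenz system it cites):

```python
# Two trajectories of the chaotic logistic map (r = 4) whose initial
# conditions differ by only one part in a million.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.200000, 0.200001
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
# Within a few dozen steps the two trajectories bear no resemblance to each
# other, which is the sense in which arbitrarily small measurement errors
# about the present can swamp predictions about the far future.
```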

 

But the model does not consider any of these cases. 

In any case, by the author's own analysis (which is based on a large number of assumptions), there are several scenarios where the outcome is not favorable to the longtermist.

Again, interesting work, but this modeling framework is not very persuasive to begin with (regardless of which way the final results point).

Several good points made by Linch, Aryeh and steve2512. 

As for making my skepticism more precise in terms of probability: it is less that I have a clear sense of timelines radically different from those who believe AGI will explode upon us in the next few decades, and more that I find most justifications and arguments made in favor of a timeline of less than 50 years rather unconvincing.

For instance, having studied and used state-of-the-art deep learning models, I am simply not able to see why we are significantly closer to AGI today than we were in the 1950s. General intelligence requires something qualitatively different from GPT-3 or AlphaGo, and I have seen literally zero evidence that any AI system comprehends things even remotely the way humans do.

Note that the last point (namely, that AI should understand objects, events, and relations the way humans do) is not as such a requirement for AGI, but it does make me skeptical of those who cite these examples as evidence of the progress we have made towards general intelligence.

 

I have looked at Holden's post, and there are several things that are not clear to me. Here is one: there appears to be a lot of focus on the number of computations, especially in comparison to the human brain, and while I have little doubt that artificial systems will surpass those limits (if they have not already done so), the real question is decoding the nature of the wiring and the functional form of the relation between inputs and outputs. Perhaps there is something I am not getting here, but (at least in principle) isn't there an infinite number of degrees of freedom associated with a continuous function? Even if one argued that we can define equivalence classes of similar functions (made rigorous), does that not still leave us with an extremely large number of possibilities?

You're completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or nuclear war) are improbable.

Indeed, as I have said, even if the probability of the future scenarios I am positing is of the order of 0.00001 (which makes them improbable), that can hardly be grounds to dismiss the argument in this context, simply because longtermism appeals precisely to the immense consequences of events whose absolute probability is very low.
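As a toy version of that arithmetic (the numbers below are placeholders I have made up, not estimates):

```python
# Toy expected-value calculation with made-up numbers, only to show why a
# scenario with probability ~1e-5 cannot simply be waved away once one accepts
# the longtermist habit of multiplying tiny probabilities by vast stakes.
p_scenario     = 1e-5    # probability of the "improbable" future scenario
lives_at_stake = 1e10    # scale of consequences longtermist arguments invoke

expected_effect = p_scenario * lives_at_stake
print(expected_effect)   # 100000.0 -- on par with a sure intervention saving 1e5 lives
```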

 

At the risk of quoting out of context:

If we increase the odds of survival at one of the filters by one in a million, we can multiply one of the inputs for C by 1.000001.
So our new value of C is 0.01 x 0.01 x 1.000001 = 0.0001000001
New expected time remaining for civilization = M x C = 10,000,010,000

 

In much the same way, it is absolutely correct that I am referring to one side of the distribution; however, this is not because the other side does not exist or is irrelevant, but rather because I want to highlight the magnitude of the uncertainty and how it expands with time.

 

It also follows that I am in no way disputing (and my argument is somewhat orthogonal to) the different counterfactuals for WW2 that you've outlined.

Thanks for the response. I believe I understand your objection, but it would be helpful to distinguish the following two propositions:

a. A catastrophic risk in the next few years is likely to be horrible for humanity over the next 500 years.

b. A catastrophic risk in the next few years is likely to leave humanity (and other sentient agents) worse off in the next 5,000,000 years, all things considered.

 

I have no disagreement at all with the first but am deeply skeptical of the second. And that's where the divergence comes from.

The example of a post-nuclear generation being sensitive to animal rights is just one possibility I advanced; one may consider other areas such as universal disarmament, open borders, or the end of racism/sexism. If the probability of a more tolerant humanity emerging from the ashes of a nuclear winter is even 0.00001, then from the perspective of someone looking back 100,000 years from now it is not at all obvious that the catastrophe was bad, all things considered.

For example, whatever the horrors of WWII may have been, the relative peace and prosperity that Europe has sustained since 1945 owe a great deal to the war. In addition, the widespread acknowledgement of norms and conventions around torture and human rights is partly a consequence of the war's brutality. That, of course, is far from enough to conclude that the war was a net positive. But 5,000 years into the future, are you sure that, in the majority of scenarios, WW2 would in retrospect still be a net negative event?

In any case, I have also added this to the post:

If a longtermist were to state that the expected number of lives saved in T (say 100,000) years is N (say 1,000,000), that the probability of saving at least M (say 10,000) lives is 25%, and that the probability of causing more deaths (or harm engendered) is less than 1%, all things considered (i.e., accounting for counterfactuals and opportunity costs), then I'll put all this aside and join the club!
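For what it's worth, the same criterion can be written out as a minimal sketch (the thresholds are the ones quoted above; the function name and arguments are my own, purely for illustration):

```python
def would_join_the_club(expected_lives_saved: float,
                        p_at_least_10k_saved: float,
                        p_net_harm: float) -> bool:
    """Encodes the quoted acceptance criterion: over T = 100,000 years,
    expected lives saved >= 1,000,000, P(at least 10,000 lives saved) >= 25%,
    and P(net harm, counterfactuals and opportunity costs included) < 1%."""
    return (expected_lives_saved >= 1_000_000
            and p_at_least_10k_saved >= 0.25
            and p_net_harm < 0.01)
```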