Venky1024

35 karma · Joined Sep 2021

Comments (14)

Thanks for the response. You've summarised the post very well, except that, more than limiting intellectual freedom, the conventional definition leads to an excessive focus on first-order purity at the expense of broader utilitarian considerations (think of all the vitriol that vegans direct at "deserters", which is quite irrational).

As for your view that without that solidarity veganism would not be what it is today, I am not entirely convinced. To be clear, the community of interest in this discussion is the animal advocacy community rather than vegans per se (notwithstanding the fact that the two intersect almost completely). Here are some counter-arguments to consider:

  1. Animal advocates are likely to be first-order vegans, or very close to it, anyway. If one voluntarily chooses to make lifestyle changes out of concern for animal suffering, then one is likely to go to significant lengths to avoid animal products. Not everyone will go the same distance, but that's okay (or so I think).
  2. Peter Singer, the philosopher who arguably has the greatest claim to influencing people on animal rights and liberation, is not a strict vegan and in fact describes himself as "flexible". Yuval Harari is another person who is passionate about ending the industrial farming of animals but describes himself as "vegan-ish". If important thinkers who undoubtedly have great influence on people refrain from using the word "vegan", then why do you think that animal advocates, as a community, should not shed that label or loosen its definition?
  3. Conversely, taking vegan purity to the extreme, we have people like Gary Francione who are opposed to any welfarist progress (regardless of its consequential value) and who insist that we should avoid meat alternatives because they normalize the idea of consuming animals. I hope we can agree that that position is counterproductive.
  4. I may be extrapolating from personal experience, but the fact that first-order veganism is so clearly defined (very arbitrary, yet very well-defined) gives adherents the sense that they are already doing enough, and dilutes thinking along utilitarian lines (what if a vegan purist compared herself to someone who is 95% plant-based but convinces 3 people every month to reduce their animal-product consumption by 50%? A rough numerical sketch follows this list).
  5. While, on the one hand, vegans may be admired for their commitment to the cause and may inspire others to do the same, they may also seem too distant, which could work against people making changes they would otherwise have been open to. Again, this is speculative, and in general I think it cuts both ways.
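To make the comparison in point 4 concrete, here is a minimal back-of-the-envelope sketch. All the numbers are illustrative assumptions on my part (each person's baseline consumption is normalised to 1 unit per year, and each persuaded reduction is assumed to persist for a year), not estimates from any source:

```python
# Back-of-the-envelope comparison for point 4. Numbers are illustrative only.

BASELINE = 1.0  # assumed animal-product consumption per person per year (1 unit)

# Strict first-order vegan: avoids 100% of her own consumption.
purist_impact = 1.0 * BASELINE

# 95% plant-based advocate: avoids 95% of her own consumption and
# persuades 3 people per month to cut their consumption by 50%.
own_reduction = 0.95 * BASELINE
persuaded_reduction = 3 * 12 * 0.5 * BASELINE  # 3 people/month over a year

advocate_impact = own_reduction + persuaded_reduction

print(f"Purist averts   ~{purist_impact:.1f} units/year")
print(f"Advocate averts ~{advocate_impact:.1f} units/year")
```

Under these (debatable) assumptions the imperfect advocate averts roughly an order of magnitude more consumption than the purist, which is the asymmetry I am gesturing at.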

You may be slightly mistaken about what I am stating: the ambiguity is in the official definition, even though it is a sensible-sounding one, whereas the conventional definition is well-defined ('no first-order consumption') but arbitrary. The problem arises not so much from arbitrariness in and of itself, but from demanding strict adherence to (and placing unwarranted focus on) something that isn't well-justified to begin with. That leads to all sorts of contradictions.

 

On the second point, I agree that the distinctions between the two examples are somewhat arbitrary. One might argue that animal testing is in many instances unnecessary (it turns out several tests rest on methods and assumptions that have been around for a century and have persisted more out of inertia than any clear evaluation of their effectiveness), whereas conventional agriculture genuinely depends on pesticides; but I wouldn't find that argument very convincing.

I am NOT disputing the harm to animals from eating or consuming animal products in any way, nor do I believe that the harm itself is in some sense vague or poorly defined (on the contrary, very few things stand out as clearly as that).

 

The distinction I am trying to draw is between the first-order or direct harm from a given action and the multiple indirect ways - second-order and beyond - in which that action can lead to suffering. In the conventional definition of veganism, the focus is almost entirely on the first-order effects, especially when it comes to personal identification with the term "vegan". This asymmetric focus comes at the expense of consequentialist consideration of our actions.

Not sure I follow this, but doesn't the very notion of stochastic dominance arise only when we have two distinct probability distributions? In this scenario the distribution of outcomes is held fixed, while the net expected utility is determined by weighting the outcomes according to other criteria (such as risk aversion or aversion to no-difference).
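For reference, here is the standard statement of first-order stochastic dominance that I have in mind when I say two distinct distributions are required (my notation, not the post's):

```latex
% First-order stochastic dominance: lottery A (with CDF F_A) dominates
% lottery B (with CDF F_B) over outcomes x if and only if
F_A(x) \le F_B(x) \quad \text{for all } x,
\qquad \text{with } F_A(x_0) < F_B(x_0) \text{ for some } x_0.
% Equivalently, \mathbb{E}[u(X_A)] \ge \mathbb{E}[u(X_B)] for every
% non-decreasing utility function u. The definition has no content
% unless F_A and F_B are genuinely different distributions.
```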

Not sure I agree. Brian Tomasik's post is less a general argument against the approach of EV maximization than a demonstration of its misapplication in a context where the expectation is computed across two distinct distributions of utility functions. As an aside, I also don't see the relation between the primary argument being made there and the two-envelopes problem, because the latter can be resolved by identifying a very clear mathematical flaw in the claim (that switching is better).
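For concreteness, the flaw I am referring to is the standard one; a quick rendering of it in my own notation:

```latex
% Flawed switching argument: let X be the amount in the chosen envelope,
% so the other envelope supposedly yields
%   E[\text{other}] = \tfrac{1}{2}(2X) + \tfrac{1}{2}\,\tfrac{X}{2} = \tfrac{5}{4}X ,
% which appears to favour switching. The error is that X denotes different
% quantities in the two branches. Writing the two amounts as Y and 2Y
% removes the ambiguity:
E[\text{gain from switching}]
  = \tfrac{1}{2}\,(2Y - Y) + \tfrac{1}{2}\,(Y - 2Y) = 0 .
```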

This is a very interesting study and analysis.

I was wondering what its implications would be for an area like animal rights/welfare, where the baseline support is likely to be considerably lower than for climate change.

If we assume that the polarizing effect of radical activism holds across other issues as well, then the fraction of people who become less supportive may be higher than the fraction who are persuaded to become more concerned (for the simple reason that, to start with, the odds of people supporting even the more moderate animal rights positions would be rather low).

I reckon, though, that such simple extrapolation is fraught, and other factors will come into the picture when it comes to animal advocacy.


I didn't get the intuition behind the initial formulation:

 

What exactly is that supposed to represent? And what was the basis for assigning numbers to the contingency matrix in the two example cases you've considered? 

...it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)"

Thanks, that is a fairly accurate summary of one of the crucial points I am making, except I would also add that the difficulty of estimation increases with time. And this is a major concern here, because the case for longtermism rests precisely on there being greater and greater numbers of humans (and other sentient independent agents) as the time horizon expands.
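As a toy illustration of why this worries me (all figures below are invented for the example and are not estimates of anything): the expected value of a longtermist intervention is roughly the shift in probability of a good outcome multiplied by the number of future lives affected, and the second factor explodes with the horizon while the first becomes ever harder to pin down, so tiny and essentially unjustifiable differences in the probability estimate swing the conclusion enormously.

```python
# Toy numbers only: how the EV of an intervention swings with tiny,
# hard-to-justify shifts in the estimated probability of success as the
# assumed number of affected future lives grows with the time horizon.

horizons = {
    "next 100 years": 1e10,     # assumed lives affected (illustrative)
    "next 10,000 years": 1e14,
    "next 1e9 years": 1e24,
}

for label, lives in horizons.items():
    for delta_p in (1e-6, 1e-9, -1e-9):  # invented probability shifts
        ev = delta_p * lives
        print(f"{label:>18}: delta_p = {delta_p:+.0e} -> EV ~ {ev:+.2e} lives")
```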

 

Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.

Fully agree that we should try, but the case for longtermism remains rather weak until we have some estimates and bounds that can be reasonably justified.

Great points again!
I have only cursorily examined the links you've shared (bookmarked them for later), but I hope the central thrust of what I am saying does not depend too strongly on close familiarity with their contents.

A few clarifications are in order. I am really not sure about AGI timelines, and that's why I am reluctant to attach any probability to them. For instance, the only reason I believe there is less than a 50% chance that we will have AGI in the next 50 years is that we have not seen it yet, and it seems rather unlikely to me that the current directions will lead us there. But that is a very weak justification. What I do know is that there has to be some radical qualitative change for artificial agents to go from excelling in narrow tasks to developing general intelligence.

That said, it may seem like nit-picking, but I do want to draw a distinction between "no significant progress" and "no progress at all" towards AGI. I am claiming only the former; indeed, I have no doubt that we have made incredible progress with algorithms in general. I am less convinced about how much those algorithms bring us closer to AGI. (In hindsight, it may turn out that current deep learning approaches such as GANs contain path-breaking proto-AGI ideas/principles, but I am unable to see it that way.)

 

 If we consider a scale of 0-100 where 100 represents AGI attainment and 0 is some starting point in the 1950s, I have no clear idea whether the progress we've made thus far is close to 5 or 0.5 or even 0.05. I have no strong arguments to justify one or the other because I am way too uncertain about how far the final stage is.


There can also be no question with respect to the other categories of progress you have highlighted, such as compute power, infrastructure, and large datasets - indeed, I see these as central to the remarkable performance we have come to witness with deep learning models.

My perspective is that, while there has been plenty of progress in understanding several processes in the brain (signal propagation, the mapping of specific sensory stimuli to neuronal activity, theories of how brain wiring at birth may encode various learning algorithms), this constitutes piecemeal knowledge and still seems quite a few strides removed from the bigger question: how do we attain high-level cognition, develop abstract thinking, and manage to reason through and solve complex mathematical problems?

 

Sorry if I'm misunderstanding.

"isn't there an infinite degree of freedom associated with a continuous function?"

I'm a bit confused by this; are you saying that the only possible AGI algorithm is "the exact algorithm that the human brain runs"? The brain is wired up by a finite number of genes, right?

I agree that we don't necessarily have to reproduce the exact wiring or the functional relation in order to create a general intelligence (which is why I mentioned the equivalence classes).

A finite number of genes implies finite steps/information/computation (that is not disputable, of course), but the number of potential wiring configurations in the brain, and of functional forms between input and output, is exponentially large. (It is, in principle, infinite if we want to reproduce the exact function, but we both agree that may not be necessary.) Pure exploratory search may not be feasible, and one may make the case that with appropriate priors and some assumed modular structure of the brain the search space shrinks considerably, but how much of a quantitative grip do we have on this? And how much rests on speculation?
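To give a sense of the scale I have in mind, here is a crude counting exercise. The neuron and synapse counts are commonly cited ballpark figures, and the counting model (each synapse independently picking a target) is a deliberate oversimplification:

```python
# Crude, illustrative estimate of the size of the brain-wiring search space.
from math import log10

N_NEURONS = 8.6e10          # ~86 billion neurons (ballpark figure)
SYNAPSES_PER_NEURON = 7e3   # ~7,000 synapses per neuron (ballpark figure)

# Naive model: each of the ~N*k synapses independently chooses one of the
# ~N possible target neurons, ignoring geometry, modularity, and priors.
total_synapses = N_NEURONS * SYNAPSES_PER_NEURON
log10_configurations = total_synapses * log10(N_NEURONS)

print(f"log10(# of naive wiring configurations) ~ {log10_configurations:.2e}")
# Roughly 6.6e15, i.e. a number with quadrillions of digits; any reduction
# from priors or modular structure has to be weighed against a space this big.
```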
