Magnus Vinding

Researcher @ Center for Reducing Suffering
1441 karma · Joined May 2018 · Copenhagen, Denmark


Working to reduce extreme suffering for all sentient beings.

Author of Suffering-Focused Ethics: Defense and Implications; Reasoned Politics; & Essays on Suffering-Focused Ethics.

Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).

Ebooks available for free here and here.


Topic Contributions

I agree that vegan advocacy is often biased and insufficiently informed. That being said, I think similar points apply with comparable, if not greater, strength in the "opposite" direction, and I think we end up with an unduly incomplete perspective on the broader discussion around this issue if we focus only (or almost only) on the biases of vegan advocacy.

For example, in terms of identifying reasonable moral views (which, depending on one's meta-ethical view, isn't necessarily a matter of truth-seeking, but perhaps at least a matter of being "plausible-view-seeking"), it seems that there are strong anthropocentric and speciesist biases that work against a fair evaluation of the arguments against speciesism, and which likewise work against a fair evaluation of the moral status of veganism (e.g. from an impartial sentiocentric perspective).

Similarly, with respect to the feasibility of veganism, it seems that factors such as personal inconvenience and perceived stigma against vegans plausibly give rise to biases (in many people) toward overstating the difficulties and hazards of veganism (as also briefly acknowledged in the OP: "I’m sure many people do overestimate the difficulties of veganism").

Relatedly, with respect to the section "What do EA vegan advocates need to do?", I agree with the recommendation to "Take responsibility for the nutritional education of vegans you create". But by extension, an impartial sentiocentric perspective (and even just moderately impartial ones) would also endorse an analogous recommendation like "Take responsibility for the harm that you directly cause to, or fail to prevent for, non-human animals". It seems important not to exclude that aspect of our moral responsibility, and indeed to explicitly include it, as inconvenient as it admittedly is.

The view obviously does have "implausible" implications, if that means "implications that conflict with what seems obvious to most people at first glance".

I don't think what Knutsson means by "plausible" is "what seems obvious to most people at first glance". I also don't think that's a particularly common or plausible use of the term "plausible". (Some examples of where "plausible" and "what seems obvious to most people at first glance" plausibly come apart include what most people in the past might at first glance have considered obvious about the moral status of human slavery, as well as what most people today might at first glance say about the moral status of farming and killing non-human animals.)

Few people agree that "pleasure" and "happiness" are totally worthless in themselves.

Note that Knutsson does not deny that pleasure and happiness are worthwhile in the sense of being better for a person than unpleasure and unhappiness (cf. "What about making individuals happier? Yes, we should do that."). Nor does he deny that certain experiences can benefit existing beings (e.g. by satisfying certain needs). What he argues against is instead that pleasure and experiential happiness are something "above" or "on the other side of" a completely undisturbed state.

As for the claim about "few people" (and setting aside that majority opinion is hardly a good standard for plausibility, as I suspect you'd agree), it's not clear that this "few people" claim is empirically accurate, especially if it concerns the idea that pleasure isn't something "above" a completely undisturbed state. The following is an apropos quote:

The intuition that the badness of suffering doesn’t compare to the supposed badness of inanimate matter (as non-pleasure) seems very common, and the same goes for the view that contentment is what matters, not pleasure-intensity [cf. Gloor, 2017, sec. 2.1]. There are nearly 1.5 billion Buddhists and Hindus, and while Buddhism is less explicit and less consequentialist than negative utilitarianism, the basic (though not uniform) Buddhist view on how pleasure and suffering are being valued is very similar to negative utilitarianism; Hinduism contains some similar views. Ancient Western philosophers such as Epicurus and some Stoics proposed definitions of “happiness” in terms of the absence of suffering.

(On Buddhism and Epicureanism, see e.g. Breyer, 2015; Sherman, 2017; and the recent review of minimalist views of wellbeing by Teo Ajantaival.) 

The reason this matters is that EA frequently decides to make decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article. 

I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):

I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to modest considerations in our assessments of how to act, all things considered.

My claims about how I think these would be "weak to modest considerations in our assessments of how to act" are not predicated on the exact manner in which I represent my beliefs: I'd say the same regardless of whether I'm speaking in purely qualitative terms or in terms of ranges of probabilities.

In summary, people should either start stating their uncertainty explicitly, or they should start saying "I don't know".

FWIW, I do state uncertainty multiple times, except in qualitative rather than quantitative terms. A few examples:

This essay contains a lot of speculation and loose probability estimates. It would be tiresome if I constantly repeated caveats like “this is extremely speculative” and “this is just a very loose estimate that I am highly uncertain about”. So rather than making this essay unreadable with constant such remarks, I instead say it once from the outset: many of the claims I make here are rather speculative and they mostly do not imply a high level of confidence. ... I hope that readers will keep this key qualification in mind.

As with all the numbers I give in this essay, the following are just rough numbers that I am not adamant about defending ...

Of course, this is a rather crude and preliminary analysis.

Thanks! :)

Assigning a single number to such a prior, as if it means anything, seems utterly absurd.

I don't agree that it's meaningless or absurd. A straightforward meaning of the number is "my subjective probability estimate if I had to put a number on it" — and I'd agree that one shouldn't take it for more than that.

I also don't think it's useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on explicit calculations. I agree that such simple estimates and calculations should not necessarily be given much weight, let alone dictate our thinking, but I still think they can provide some useful information and provoke further thought. I think they can add to purely qualitative reasoning, even if there are more refined quantitative approaches that are better still.
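As a minimal sketch of the kind of "explicit calculation" a rough subjective number makes possible, the following shows a simple odds-form Bayesian update. All numbers here are hypothetical placeholders chosen purely for illustration, not estimates from the essay:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a subjective prior via Bayes' rule in odds form.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    odds = prior / (1 - prior)              # convert probability to odds
    updated_odds = odds * likelihood_ratio  # multiply by the likelihood ratio
    return updated_odds / (1 + updated_odds)  # convert back to probability

# Hypothetical example: a 1% prior combined with evidence judged
# five times more likely under the hypothesis than under its negation.
print(round(posterior(0.01, 5), 3))  # -> 0.048
```

The point is not that such a calculation settles anything, but that putting even a rough number on a belief lets one see, in ballpark terms, how much a given piece of evidence should move it.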

You give a prior of 1 in a hundred that aliens have a presence on earth. Where did this number come from?

It was in large part based on the considerations reviewed in the section "I. An extremely low prior in near aliens". The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):

All-things-considered probability estimates: Priors on near aliens

Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specifically, in light of the (interrelated) panspermia, pseudo-panspermia, and large-scale Goldilocks hypotheses, as well as the possibility of near aliens originating from another galaxy, I might assign something like a 10 percent prior probability to the existence of at least one advanced alien civilization that could have reached us by now if it had decided to. (Note that I am here using the word “civilization” in a rather liberal sense; for example, a distributed web of highly advanced probes would count as a civilization in this context.) Furthermore, I might assign a probability not too far from that — maybe around 1 percent — to the possibility that any such civilization currently has a presence around Earth (again, as a prior).

Why do I have something like a 10 percent prior on there being an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us? In short, the main reason is the info gain motive that I explore at greater length below. Moreover, as a sanity check on this conditional probability, we can ask how likely it is that humanity would send and maintain probes around other life-supporting planets assuming that we became technologically capable of doing this; roughly 10 percent seems quite sane to me.

At an intuitive level, I would agree with critics who object that a ~1 percent prior probability in any kind of alien presence around Earth seems extremely high. However, on reflection, I think the basic premises that get me to this estimate look quite reasonable, namely the two conjunctive 10-percent probabilities in “the existence of at least one advanced alien civilization that could have reached us by now if it had decided to” and “an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us”.
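The conjunctive structure of this estimate can be checked with trivial arithmetic (the two 10-percent figures are taken from the passage above):

```python
# Prior that at least one advanced alien civilization exists
# that could have reached us by now if it had decided to.
p_reachable_civ = 0.10

# Prior of a presence around Earth, conditional on such a civilization.
p_presence_given_civ = 0.10

# Conjunction: the unconditional prior on an alien presence around Earth.
p_presence = p_reachable_civ * p_presence_given_civ
print(round(p_presence, 4))  # -> 0.01, i.e. ~1 percent
```

So the seemingly high ~1 percent figure is just the product of two individually modest 10-percent estimates.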

Note also that there are others who seem to defend considerably higher priors regarding near aliens (see e.g. these comments by Jacob Cannell; I agree with some of the points Cannell makes, though I would frame them in more uncertain and probabilistic terms).

I can see how substantially lower priors than mine could be defensible, even a few orders of magnitude lower, depending on how one weighs the relevant arguments. Yet I have a hard time seeing how one could defend an extremely low prior that practically rules out the existence of near aliens. (Robin Hanson has likewise argued against an extremely low prior in near aliens.)

Thanks for your comment. I basically agree, but I would stress two points.

First, I'd reiterate that the main conclusions of the post I shared do not rest on the claim that extraordinary UFOs are real. Even assuming that our observed evidence involves no truly remarkable UFOs whatsoever, a probability of >1 in 1,000 in near aliens still looks reasonable (e.g. in light of the info gain motive), and thus the possibility still seems (at least weakly) decision-relevant. Or so my line of argumentation suggests.

Second, while I agree that the wild abilities are a reason to update toward thinking that the reported UFOs are not real objects, I also think there are reasons that significantly dampen the magnitude of this update. First, there is the point that we should (arguably) not be highly confident about what kinds of abilities an advanced civilization that is millions of years ahead of us might possess. Second, there is the point that some of the incidents (including the famous 2004 Nimitz incident) involve not only radar tracking (as reported by Kevin Day in the Nimitz incident), but also eyewitness reports (e.g. by David Fravor and Alex Dietrich in the case of Nimitz), and advanced infrared camera (FLIR) footage (shot by Chad Underwood during Nimitz). That diversity of witnesses and sources of evidence seems difficult to square with the notion that the reported objects weren't physically real (which, of course, isn't to say that they definitely were real).

When taking these dampening considerations into account, it doesn't seem to me that we have that strong reason to rule out that the reported objects could be physically real. (But again, the main arguments of the post I shared don't hinge on any particular interpretation of UFO data.)

I think it would have been fairer if you hadn't removed all the links (to supporting evidence) that were included in the quote below, since without them it just comes across as a string of unsupported claims:

Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectively, for the sake of not supporting industries that actively promote poor human nutrition in general, as well as individually, to maximize one’s own health so one can be more effectively altruistic.

I think this evidence on personal health is relevant in the ways described. I don't think it's fair to say that the quote above implies that “[health benefits] will definitely happen with no additional work from you, without any costs or trade-offs”; obviously, any change in diet will require some work and will involve some tradeoffs. But I agree that it's worth addressing the potential pitfalls of vegan diets, and it's a fair critique that that would have been worth including in that essay (even though a top link on the blog does list some resources on this).

FWIW, in terms of additional work, tradeoffs, and maximizing health, I generally believe that it is worth making a serious investment into figuring out how to optimize one's health, such as by investing in a DNA test for nutrition, and I think this is true for virtually everyone. Likewise, I think it's worth being clear that all diets involve tradeoffs and risks, including both vegan and omnivore diets (some of the risks associated with the latter are hinted at in the links above: "red meat, chicken meat, fish meat, eggs and dairy").

I didn't claim that there isn't plenty more data. But a relevant question is: plenty more data for what? He says that the data situation looks pretty good, which I trust is true in many domains (e.g. video data), and that data would probably in turn improve performance in those domains. But I don't see him claiming that the data situation looks good in terms of ensuring significant performance gains across all domains, which would be a more specific and stronger claim.

Moreover, the deference question could be posed in the other direction as well, e.g. do you not trust the careful data collection and projections of Epoch? (Though again, Ilya saying that the data situation looks pretty good is arguably not in conflict with Epoch's projections — nor with any claim I made above — mostly because his brief "pretty good" remark is quite vague.)

Note also that, at least in some domains, OpenAI could end up having less data to train their models with going forward, as they might have been using data illegally.

I think it's a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing "fates worse than death".

I'm not talking about people sacrificing themselves or the whole world. Even if we were to adopt a purely survivalist perspective, I think it's still far from obvious that trying to slow things down is more effective than focusing on other aims. After all, the space of alternative aims that one could focus on is vast, and trying to slow things down comes with non-trivial risks of its own (e.g. risks of backlash from tech-accelerationists). Again, I'm not saying it's clear; I'm saying that it seems to me unclear either way.

We should be doing all we can now to avoid having to face such a predicament!

But, as I see it, what's at issue is precisely what the best way is to avoid such a predicament, or how best to navigate our current all-too-risky predicament.

FWIW, I think that a lot of the discussion around this issue appears strongly fear-driven, to such an extent that it seems to get in the way of sober and helpful analysis. This is, to be sure, extremely understandable. But I also suspect that it is not the optimal way to figure out how to best achieve our aims, nor an effective way to persuade readers on this forum. Likewise, I suspect that rallying calls along the lines of "Global moratorium on AGI, now" might generally be received less well than, say, a deeper analysis of the reasons for and against attempts to institute that policy.

What are the downsides from slowing down?

I'd again prefer to frame the issue as "what are the downsides from spending marginal resources on efforts to slow down?" I think the main downside, from this marginal perspective, is opportunity costs in terms of other efforts to reduce future risks, e.g. trying to implement "fail-safe measures"/"separation from hyperexistential risk" in case a slowdown is insufficiently likely to be successful. There are various ideas that one could try to implement.

In other words, a serious downside of betting chiefly on efforts to slow down over these alternative options could be that these s-risks/hyperexistential risks would end up being significantly greater in counterfactual terms (again, not saying this is clearly the case, but, FWIW, I doubt that efforts to slow down are among the most effective ways to reduce risks like these).

a fast software-driven takeoff is the most likely scenario

I don't think you need to believe this to want to be slamming on the brakes now.

Didn't mean to say that that's a necessary condition for wanting to slow down. But again, I still think it's highly unclear whether efforts that push for slower progress are more beneficial than alternative efforts.
