All of Magnus Vinding's Comments + Replies

Peacefulness, nonviolence, and experientialist minimalism

It's unfortunate that the quote I selected implies "all minimalist axiologies" but I really was trying to talk about this post.

Perhaps it would be good to add an edit on that as well? E.g. "The author agrees that the answers to these questions are 'yes' (for the restricted class of minimalist axiologies he explores here)." :)

(The restriction is relevant, not least since a number of EAs do seem to hold non-experientialist minimalist views.)

Rohin Shah (3mo):
Sure, done.
Peacefulness, nonviolence, and experientialist minimalism

The author agrees that the answers to these questions are "yes".

Not quite. The author assumes a certain class of minimalist axiologies (experientialist ones), according to which the answers to those questions are:

  1. Yes (though a world with untroubled sentient beings would be equally perfect, and there are good reasons to focus more on that ideal of minimalism in practice).
  2. If the hypothetical world contains no disvalue, then pressing the button is not strictly better, but if the hypothetical world does contain disvalue, then it would be better to press a cess
... (read more)
Rohin Shah (3mo):
I ignored the first footnote because it's not in the post's remit, according to the post itself: If you assume this limited scope, I think the answer to the second question is "yes" (and that the post agrees with this). I agree that things change if you expand the scope to other minimalist axiologies. It's unfortunate that the quote I selected implies "all minimalist axiologies" but I really was trying to talk about this post.

I shouldn't have called it "the main point"; I should have said something like "the main point made in response to the two questions I mentioned", which is what I actually meant. I agree that there is more detail about why the author thinks you shouldn't be worried about it that I did not summarize. I still think it is accurate to say that the author's main response to questions 1 and 2, as written in Section 2, is "the answers are yes, but actually that's fine and you shouldn't be worried about it", with the point about cessation implications being one argument for that view.

Thanks for summarizing it.

The worries I respond to are complex and the essay has many main points. Like any author, I hope that people would consider the points in their proper context (and not take them out of context). One main point is the contextualization of the worries itself, which is highlighted by the overviews (1.1–1.2) focusing a lot on the relevant assumptions and on minding the gap between theory and practice.

To complex questions, I don't think it's useful to reduce answers to either "yes" or "no", especially when the answers rest on unrealistic assumptions and look very different in theory versus practice. Between theory and practice, I also tend to consider the practical implications more important.

This analysis seems to neglect all "net negative outcomes", including scenarios in which s-risks are realized (as Mjeard noted), the badness of which can go all the way to the opposite extreme (see e.g. "Astronomical suffering from slightly misaligned artificial intelligence").

Including that consideration may support a more general focus on ensuring a better quality of the future, which may also be supported by considerations related to grabby aliens.

Against the "smarts fetish"

I think it's important to stress that it's not just that some people with an extremely high IQ fail to change their minds on certain issues, and more generally fail to overcome confirmation bias (which I think is fairly unsurprising). A key point is that there actually doesn't appear to be much of a correlation at all between IQ and resistance to confirmation bias.

So to slightly paraphrase what you wrote above, I didn't just write the post because a correlation across a population is of limited relevance when you’re dealing with a smart individual wh... (read more)

I think the studies you refer to may underrate the importance of IQ for good epistemics. First, as I mentioned in my other comment, the correlation between IQ-like measures and the most comprehensive test of rationality was as high as 0.695. This is especially noteworthy considering the fact that Stanovich in particular (I haven't followed the others' work) has for a long time argued along your lines: that there are many things that IQ tests miss. So if anything, one would expect him to be biased in the direction of too low a correlation.

Second, psychological studies of confirmation bias and other biases tend to study participants' reactions to short vignettes. They don't follow participants over longer periods of time. And I think this may lead them to underrate the importance of intelligence for good epistemics, in particular in communities like the effective altruism and rationalist communities. I think that people can to some extent (though certainly not fully) overcome confirmation bias and other biases through being alert to them (not least in interpersonal discussions), through forming better mental habits, through building better epistemic institutions, and so on. This work is, however, quite cognitively demanding, and I would expect more intelligent people to be substantially better at it. Less intelligent people are likely not as good at engaging in the kind of reflection on their own and others' thought processes needed to get these kinds of efforts off the ground.

I think that the effective altruist and rationalist communities are unusually good at it: they are constantly on the lookout for biased reasoning, and often engage in meta-discussions about their own and each other's reasoning, e.g. whether they show signs of confirmation bias. And I think a big reason why that works so well is that these communities are comprised by so many i
Against the "smarts fetish"

You argue that EA overrates IQ

As noted above, my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice (because those other important traits seem neglected relative to smarts, and also because — at the level of what we seek to develop and incentivize — those other traits seem more elastic and improvable).

I noted in the comment above that:

one line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and

... (read more)
John G. Halstead (4mo):
"my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice" The claim that EA overrates IQ is the same as the claim that other traits deserve more attention.
Against the "smarts fetish"

“Science advances one funeral at a time.” If that’s true,

If that were literally true, then science wouldn't ever advance much. :)

It seems that most scientists are in fact willing to change their minds when strong evidence has been provided for a hypothesis that goes against the previously accepted view. The "Planck principle" seems more applicable to scientists who are strongly invested in a given hypothesis, but even in that reference class, I suspect that most scientists do actually change their minds during their lifetime when the evidence is strong. An... (read more)

Yep, that’s why I referred to your 2nd and 3rd traits: A better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory). I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.
Against the "smarts fetish"

Thanks for your comment and for listing those traits and skills; I strongly agree that those are all useful qualities. :)

One might argue that willingness to do grunt work, taking initiative, and mental stamina all belong in a broader "drive/conscientiousness" category, but I think they are in any case important and meaningfully distinct traits worth highlighting in their own right.

Likewise, one could perhaps argue that "ability to network well" falls under a broader category of "social skills", in which interpersonal kindness and respect might also be said... (read more)

Against the "smarts fetish"

Thanks, it looks interesting. :)

Against the "smarts fetish"

Thanks for your comment, Linch. :)

It's a fair point that my post was quite vague on some key points, and your comment provides a great invitation for me to try to clarify my claims and views a bit.

The article claims that an important trait (smarts) is overrated as a precondition to impact

I actually wouldn't say that that's my core claim, although I do agree with it.

My claim about overemphasis relates more to the level of actions, norms, and practical focus than it relates to predictions about how much variance in impact IQ accounts for. (This is somewhat a... (read more)

John G. Halstead (4mo):
Like Linch, I do not see how you present any arguments for your main conclusion in the post. You argue that EA overrates IQ but present no arguments that this is the case. Your response also doesn't present any arguments for that conclusion.
Against the "smarts fetish"

Yeah, I know; I didn't mean to imply otherwise. :)

Against the "smarts fetish"

Thanks for sharing, I wasn't aware of that. :)

I'm not surprised that there's a strong correlation between those measures. However, I think it's worth keeping in mind that such "general rationality", in the sense of reasoning correctly in various tests, is still quite different from, and still doesn't capture, many of the traits and virtues listed above, such as being highly driven (let alone altruistically driven), resisting signaling drives in high-stakes social contexts, and displaying interpersonal kindness.

I wasn't saying it has anything to do with, e.g., interpersonal kindness. It was a comment on the relationship between rationality and intelligence, which your last point related to.
Against the "smarts fetish"

I agree on the correlation point. :)

But I don't think that undermines the point about a potential overemphasis on smarts:

Many of these traits are likely correlated with IQ, but that does not negate the point that one can overemphasize IQ at their expense, and that one can have a high IQ and still completely fail to develop these other traits and virtues.

Moreover, for some of the traits, there doesn't appear to be much of a positive correlation, such as in the case of conscientiousness:

studies suggest that there is a weak negative relationship between consc

... (read more)
Nathan Young (4mo):
Yeah, you've changed my mind a bit.
Stanovich, West, and Toplak (who are cited in your last link) developed maybe the most ambitious general rationality test to date. As measured by that test, rationality exhibited a strong correlation with IQ-type tests (0.695). See Stuart Ritchie's informative review.
Against the "smarts fetish"

I agree that most of these other traits do get emphasized, and I probably should have acknowledged that. :)

But I still think it would be beneficial to emphasize most of these traits much more (which is part of the reason why a book like The Scout Mindset is so important). For example, it seems to me that exploration of fundamental issues is extremely neglected relative to its importance. It also seems to me that potential inconvenience and unpleasantness biases warrant more attention in discussions about the importance of non-human animals and risks of ver... (read more)

Forecasting transformative AI: what's the burden of proof?

your post seems to be arguing that current trends don't suggest a coming explosion, but not that they establish a super-high burden of proof for expecting one.

I'm not sure what you mean by a "super-high burden of proof", but I think the reasons provided in that post represent fairly strong evidence against expecting a future growth explosion, and, if you will, a fairly high burden of proof against it. :)

Specifically, I think the points about the likely imminent end of Moore's law (in the section "Moore’s law is coming to an end") and the related point that... (read more)

Forecasting transformative AI: what's the burden of proof?

I strongly agree with this, but feel that it's been acknowledged both in the reports I draw on and in my pieces.

At the risk of repeating myself, I'll just clarify that my point isn't merely that economic growth has declined significantly since the 1960s. That would be a fairly trivial and obvious point. :)

The key point I was gesturing at, and which I haven't seen discussed anywhere else, is that when we look at doubling rates across the entire history of economic growth (and perhaps the history of life, cf. Hanson, 1998), we have seen a slowdown since the ... (read more)

Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

I'm sure there are imperfections remaining, and it remains vague, but I think most people can get a pretty good idea of what's being pointed at there, and I think it reasonably fleshes out the vaguer, simpler definition (which I think is also useful for giving a high-level impression).

I'd disagree that most people can get a good idea of what's being pointed at; not least for the reasons I outlined in Section 1.2 above, regarding how advanced software could already reasonably be claimed to have "precipitate[d] a transition comparable to (or more significant... (read more)

Preventing low back pain with exercise

The evidence base for preventing low back pain with exercise seems much greater than that for adjusting posture, stretching and using ergonomic furniture, which his post also recommends. I wanted to emphasise the importance of exercise as the primary intervention.

It's not clear to me that Huang et al. compared exercise to the best alternative interventions, so it seems safer to say that it's best among those included in that review, and perhaps among the interventions that have been studied the most. But that's a considerably weaker claim than "low back pa... (read more)

Tips for overcoming low back pain

Update: I tried taking curcumin supplements to boost my general health, and after taking them for some weeks, I began to notice that I didn't get low back pain even when I did things that usually triggered it. This was much to my surprise, since I didn't expect the supplements to have any effect on my back. Where before I needed to be careful not to do things that would give me low back pain, I now feel like I would have to make an active effort to make the pain come back. So it feels like a really big difference.

This is anecdotal, of course, but it turns ... (read more)

The effective altruist case for parliamentarism

Tiago writes the following in response to a similar comment made on Overcoming Bias:

That is a common hypothesis, which is why studies usually include legal origins as a control. Others do not need to do it, because they used a fixed effects approach such that any invariant characteristic such as colonizer will be automatically controlled for. But endogeneity might always be an issue, which is why the book also deals with theory, and auxiliary evidence from companies and municipalities. I think you would like it!

As Tiago notes, the evidence goes beyond just... (read more)

Tiago Santos (9mo):
In addition to Magnus' points, I don't think the cultural argument does it. It is much less well specified. Some people take issue with my defining parliamentarism in the book as "executive subordination to the legislature", finding it too vague (I think it is clear enough, naturally). But if that risks being too vague, culture is far more so. In a sense, culture has globalized dramatically around the world: language, art, form of dress, food, family size, etc. You will probably argue that those are not the aspects that matter, but then shouldn't the claim that culture is what matters specify which aspects matter?

Some would say that the aspects that matter are issues like trust, low corruption, respect for property rights, etc. But are there any cultures which do not value those things, which claim they are outright undesirable? I don't think there are. Instead, all cultures value those goods and would like to achieve them. If all value those traits at least in the abstract, does it make sense to call them cultural? Unfortunately, their achievement is not conditional only on the desire to achieve them, but on the underlying incentives in the society. If you live in an "extractive society", behaving like the most trusting Scandinavian person might not make you advance a lot.

A good parallel might be price controls (which are, incidentally, much more prevalent in presidential countries). In countries which do not implement price controls, prices reach their equilibrium and business is trusted. But where countries do implement price controls, parallel markets are created, businesses are accused of cheating and of being excessively greedy, and trust is undermined. Many lament that the problem is that businesses are more greedy in the latter type of country than in the former. But the incentives are doing the real work.

Lastly, I would note that studies do control for long-term cultural aspects when they control for the region/country. One could argue that it is
The effective altruist case for parliamentarism

I think the cause of promoting parliamentarism is potentially quite promising, and something that deserves considerably more attention than it has received so far (besides the OP, I believe this post is the only post related to parliamentarism on this forum).

Unfortunately, I don't feel the OP does justice to the cause or to the arguments in its favor. And I suspect Tiago himself would agree; the more convincing case is found in his book on the subject (that direct link to the book is found on his website).

The following is an excerpt on "Parliamentarism vs.... (read more)

Tiago Santos (9mo):
Thanks, Magnus. You're right, the argument involves many more elements which I did not explore in the post. I really like your summary and would invite all others to read the book (which is pretty short, and available for free as a pdf at
G Gordon Worley III (9mo):
I'm sure this is addressed in the book I haven't read, but I wonder how much of this is confounded by former British rule. That is, if you factor out parliamentary systems that were established after a legacy of British rule, would it still be the case that parliaments are better? I'm guessing the argument is "yes", but I'm not sure, and am somewhat suspicious that some of these effects could be cultural ones that just happen to come along with parliaments, making parliamentarism an effect rather than a cause.
New book — "Suffering-Focused Ethics: Defense and Implications"

The book is now also available in audiobook and hardcover formats, and is free on Kindle as well.

Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

I must admit that I’m quite confused about some of the key definitions employed in this series, and, in part for that reason, I’m often confused about what claims are being made. Specifically, I’m confused about the definitions of “transformative AI” and “PASTA”, and find them to be more vague and/or less well-chosen than what sometimes seems assumed here. I'll try to explain below.

1. Transformative AI (TAI)

1.1 The simple definition

The simple definition of TAI used here is "AI powerful enough to bring us into a new, qualitatively different future". T... (read more)

Holden Karnofsky (10mo):
On "transformative AI": I agree that this is quite vague and not as well-defined as it would ideally be, and is not the kind of thing I think we could just hand to superforecasters. But I think it is pointing at something important that I haven't seen a better way of pointing at. I like the definition given in Bio Anchors (which you link to), which includes a footnote addressing the fact that AI could be transformative without literally causing GDP growth to behave as described. I'm sure there are imperfections remaining, and it remains vague, but I think most people can get a pretty good idea of what's being pointed at there, and I think it reasonably fleshes out the vaguer, simpler definition (which I think is also useful for giving a high-level impression).

In this series, I mostly stuck with the simple definition because I think the discussion of PASTA and digital people makes it fairly easy to see what kind of specific thing I'm pointing at, in a different way. I am not aware of places where it's implied that "transformative AI" is a highly well-defined concept suitable for superforecasters (and I don't think the example you gave in fact implies this), but I'm happy to try to address them if you point them out.

On PASTA: my view is that there is a degree of automation that would in fact result in dramatically faster scientific progress than we've ever seen before. I don't think this is self-evident, or tightly proven by the series, but it is something I believe, and I think the series does a reasonable job pointing to the main intuitions behind why I believe it (in particular, the theoretical feedback loop this would create, the "modeling the human trajectory" projection of what we might expect if the "population bottleneck" were removed, and the enormous transformative potential of particular technologies that might result).
Forecasting transformative AI: what's the burden of proof?

Thanks for your reply :-)

Most of your post seems to be arguing that current economic trends don't suggest a coming growth explosion.

That's not quite how I'd summarize it: four of the six main points/sections (the last four) are about scientific/technological progress in particular. So I don't think the reasons listed are mostly a matter of economic trends in general. (And I think "reasons listed" is an apt way to put it, since my post mostly lists some reasons to be skeptical of a future growth explosion — and links to some relevant sources — as opposed to... (read more)

Holden Karnofsky (10mo):
Fair point re: economic trends vs. technological trends, though I would stand by the outline of what I said: your post seems to be arguing that current trends don't suggest a coming explosion, but not that they establish a super-high burden of proof for expecting one.

Re: "For example, the observation that new scientific insights per human have declined rapidly suggests that even getting digital people might not be enough to get us to a growth explosion, as most of the insights may have been plugged already." Note that the growth modeling analyses I draw on incorporate the "ideas are getting harder to find" dynamic and discuss it at length. So I think a more specific, quantitative argument needs to be made here; I didn't argue for the plausibility of explosive growth based on non-declining insights per mind.

Re: "I think the observation mentioned in the second section of my post seems both highly relevant and overlooked, namely that if we take a nerd-dive into the data and look at doublings, we have actually seen an unprecedented deceleration (in terms of how the growth rate has changed across doublings). And while this does not by any means rule out a future growth explosion, I think it is an observation that should be taken into account, and it is perhaps the main reason to be skeptical of a future growth explosion at the level of long-run growth trends." I strongly agree with this, but feel that it's been acknowledged both in the reports I draw on and in my pieces. E.g., I discuss the demographic transition and present the possible explosion as one possibility, rather than as a future strongly implied by the past (and this is something I made a deliberate effort to do).

To be clear, my claim here isn't "The points you're raising are unimportant." I think they are quite important. In a world with linear insights per human and no deceleration, I would've written this series very differently; the declining returns and deceleration move me toward "The developme
Forecasting transformative AI: what's the burden of proof?

I don't feel this post engages with the strongest reasons to be skeptical of a growth explosion. The following post outlines what I would consider some of the strongest such reasons:

Holden Karnofsky (1y):
Most of your post seems to be arguing that current economic trends don't suggest a coming growth explosion. If current economic trends were all the information I had, I would think a growth explosion this century is <<50% likely (maybe 5-10%?). My main reason for a higher probability is AI-specific analysis (covered in future posts).

This post is arguing not "Current economic trends suggest a growth explosion is near" but rather "A growth explosion is plausible enough (and not strongly enough contraindicated by current economic trends) that we shouldn't too heavily discount separate estimates implying that transformative AI will be developed in the coming decades."

I mostly don't see the arguments in the piece you linked as providing a strong counter to this claim, but if you highlight which you think provide the strongest counters, I can consider more closely. The one that seems initially like the best candidate for such an argument is "Many of our technologies cannot get orders of magnitude more efficient." But I'm not arguing that e.g. particular energy technologies will get orders of magnitude more efficient; I'm arguing we'll see enough acceleration to be able to quickly develop something as transformative as digital people. There may be an argument that this isn't possible due to key bottlenecks being near their efficiency limits, but I don't think the case in your piece is at that level of specificity.
Magnus Vinding's Shortform

An argument in favor of (fanatical) short-termism?

[Warning: potentially crazy-making idea.]

Section 5 in Guth, 2007 presents an interesting, if unsettling idea: on some inflationary models, new universes continuously emerge at an enormous rate, which in turn means (maybe?) that the grander ensemble of pocket universes consists disproportionately of young universes.

More precisely, Guth writes that, "in each second the number of pocket universes that exist is multiplied by a factor of exp{10^37}." Thus, naively, we should expect earlier points in a g... (read more)
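To make the naive arithmetic behind this worry explicit (a rough back-of-the-envelope sketch based only on the quoted growth factor, not on the details of Guth's full model): if the number of pocket universes is multiplied by $e^{10^{37}}$ each second, then the ensemble grows as

```latex
% Exponential growth of the number of pocket universes,
% taking the quoted per-second factor at face value:
N(t) \approx N_0 \, e^{10^{37}\, t}
\quad\text{($t$ in seconds, $N_0$ the initial count)}

% Since almost all universes were created very recently,
% the fraction of the ensemble older than some age \tau is roughly
\frac{N(t - \tau)}{N(t)} \approx e^{-10^{37}\,\tau},
```

which is astronomically small for any $\tau$ much above a minuscule fraction of a second. This is the naive sense in which earlier points in a given universe's history would vastly dominate the ensemble.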

AMA: Tobias Baumann, Center for Reducing Suffering

Concerning how EA views on this compare to the views of the general population, I suspect they aren’t all that different. Two bits of weak evidence:


Brian Tomasik did a small, admittedly unrepresentative and imperfect Mechanical Turk survey in which he asked people the following:

At the end of your life, you'll get an additional X years of happy, youthful, and interesting life if you first agree to be covered in gasoline and burned in flames for one minute. How big would X have to be before you'd accept the deal?

More than 40 percent said t... (read more)

Sebastian Schwiecker (2y):
Thanks a lot for the reply and all the links.
AMA: Tobias Baumann, Center for Reducing Suffering

[Warning: potentially disturbing discussion of suicide and extreme suffering.]

I agree with many of the points made by Anthony. It is important to control for these other confounding factors, and to make clear in this thought experiment that the person in question cannot reduce more suffering for others, and that the suicide would cause less suffering in expectation (which is plausibly false in the real world, also considering the potential for suicide attempts to go horribly wrong, Humphry, 1991, “Bizarre ways to die”). (So to be clear, and a... (read more)

The case of the missing cause prioritisation research

Thanks for writing this post! :-)

Two points:

i. On how we think about cause prioritization, and what comes before

2. Consideration of different views and ethics and how this affects what causes might be most important.

It’s not quite clear to me what this means. But it seems related to a broader point that I think is generally under-appreciated, or at least rarely acknowledged, namely that cause prioritization is highly value-relative.

The causes and interventions that are optimal relative to one value system are unlikely to be optimal relative to anoth... (read more)

This post - which I found interesting and useful - feels relevant in relation to your first point. A relevant excerpt: (I added two line breaks and changed where the diagram was, compared to the original text.) (That post was written on behalf of my former employer, but not by me, and before I was aware of them.)
Why Realists and Anti-Realists Disagree

The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in.

This is where I see a dualism of sorts, at least in the way it's phrased. There is the brain disvaluing (as an evaluating subject) the state it's in (where this state is conceived of as an evaluated object of sorts). But the way I think about it, there is just the state your mind-brain is in, and the disvaluing is part of that mind-brain state. (What else could it be?)

This may just seem semantic,... (read more)

New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for sharing your reflections :-)

This is because of imagining and seeing examples as in the book and here.

Just wanted to add a couple of extra references like this:

The Seriousness of Suffering: Supplement

The Horror of Suffering

Preventing Extreme Suffering Has Moral Priority

To be more specific, I think that one second of the most extreme suffering (without subsequent consequences) would be better than, say, a broken leg.

Just want to note, also for other readers, that I say a bit about such sentiments involving "one second of the most extreme suff... (read more)

New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for your comment. I appreciate it! :-)

In relation to counterintuitions and counterarguments, I can honestly say that I've spent a lot of time searching for good ones, and tried to include as many as I could in a charitable way (especially in Chapter 8).

I'm still keen to find more opposing arguments and intuitions, and to see them explored in depth. As hinted in the post, I hope my book can provoke people to reflect on these issues and to present the strongest case for their views, which I'd really like to see. I believe such arguments can help advance the views of all of us toward greater levels of nuance and sophistication.

New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for your comment, Michael :-)

What I was keen to get an example of was mainly this (omitted in the text you quoted above):

Also, whenever there was a problem with an argument, Magnus can retreat to a less demanding version of Suffering-Focused Ethics, which makes it more difficult for the reader to follow the arguments.

That is, an example of how I retreat from the main position I defend (in Chapters 4 and 5), such as by relying on the views of other philosophers whose premises I haven't defended. I don't believe I do that anywhere. Again, what I do in... (read more)

New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for sharing your review. A few comments:

Concerning the definition of suffering, I do actually provide a definition: an overall bad feeling, or state of consciousness (as I note, I here follow Mayerfeld, 1999, pp. 14-15). One may argue that this is not a particularly reductive definition, and I say the same in a footnote:

One cannot, I submit, define suffering in more precise or reductive terms than this. For just as one cannot ultimately define the experience of, say, phenomenal redness in any other way than by pointing to it, one cannot define a bad
...
Thanks for this lengthy reply! I want to emphasise that I enjoyed and learned a lot from reading this book, and that I think of most of my criticism as resulting from a deliberate choice to keep the book readable, not as something I have concrete suggestions for improving. I appreciate your clarifications on Chapter 7, on the definition of suffering, and on the use of the arguments from Chapters 4 and 5.

Regarding "line of retreat", I meant something similar to your comment to Michael: I think I simply felt that there were many claims supported by various views, and that it was difficult for me to judge how to take these into account. I looked back to find a good example of an actual "retreat" and honestly I can't find any. It's possible that I misread something in Chapter 8 and that this tainted my expression of some of the reasoning in the book. In any case, I have clearly overemphasised that and I'll retract it.

Regarding that feeling of being persuaded, I'm not really sure what to say. It mostly felt that I could easily come up with many counter-intuitions throughout reading the book, and that raised some mental alarm bells: these are only the ideas I can come up with, and I'm sure that there are plenty more. I didn't feel that opposing views were clearly explored, even though they were listed. If that's how books that defend moral positions are supposed to be written, then my inside view thinks that's epistemically mistaken.

I'd be very interested in discussing the actual contents of my views on the ethics of suffering, on which I'd really appreciate feedback; I've scheduled myself time to write this up here over the weekend. :)
I think section 3.2, "Intra- and Interpersonal Claims", and the discussion just before it in 3.1 of Parfit's compensation principle, Mill's harm principle, and Shiffrin's consent principle are examples. You don't discuss how they defend these views/principles. (I only started reading last night, and this is about where I am now.)
New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for your question, Niklas. It's an important one.

The following link contains some resources for sustainable activism that I've found useful:

But specifically, it may be useful to cultivate compassion — the desire for other beings to be free from suffering — more than (affective) empathy, i.e. actually feeling the feelings of those who suffer.

Here is an informative conversation about it:

As I write in section 9.5 (...

Why Realists and Anti-Realists Disagree
Normative ethics: There’s a sense in which consequentialist obligations to avoid purchasing meat from factory-farmed animals are “real.” But we could also take a different perspective (according to which morality is about hypothetical contracts between people), in which case we’d see no obligations toward animals.

Realists of course agree that we can take another perspective, and that this can be fruitful, but the crucial issue for the realist is whether one perspective is ultimately more valid, or true, than others (as you hint...

The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in. This is not the same as saying that there exists a state of the world that is objectively to be disvalued. (Of course, for people who are looking for meaningful life goals, disvaluing all suffering is a natural next step, which we both have taken. :))

I talk about notions like 'life goals' (which sort of consequentialist am I?), 'integrity' (what type of person do I want to be?), 'cooperation/respect' (how do I think of the relation between my life goals and other people's life goals?), 'reflective equilibrium' (part of philosophical methodology), 'valuing reflection' (the anti-realist notion of normative uncertainty), etc. I find that this works perfectly well, and it doesn't feel to me like I'm missing parts of the picture. If you're asking how I justify particular answers to the above, I'd just say that I'm basing those answers on what feels the most right to me. On my fundamental intuitions. I consider them axiomatic, and that's where the buck stops.

This makes sense if your only bedrock concepts are Tier 1 or lower. If you allow Tier 2 (normative bedrock concepts), I'd point out that there are arguments for why all of normativity is related, in which case it would be a bit weird to say that metaphilosophy has no speaker-independent solution, but that, e.g., ethics or epistemology do have such solutions. (I take it that your moral realism is primarily based on consciousness realism, so I would classify it as Tier 1 rather than Tier 2. Of course, this typology is very crude and one can reasonably object to the specifics.)
New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks, Mike!

Great questions. Let me see whether I can do them justice.

If you could change peoples' minds on one thing, what would it be? I.e. what do you find the most frustrating/pernicious/widespread mistake on this topic?

Three important things come to mind:

1. There seems to be this common misconception that if you hold a suffering-focused view, then you will, or at least you should, endorse forms of violence that seem abhorrent to common sense. For example, you should consider it good when people get killed (because it prevents future suffering fo...

New book — "Suffering-Focused Ethics: Defense and Implications"

Thanks for your comment, George.

Sections 1.4 and 8.5 in my book deal directly with the first issue you raise. Also see Chapter 3, "Creating Happiness at the Price of Suffering Is Wrong", for various arguments against a moral symmetry between pleasure and suffering. But many chapters in the first part of the book deal with this.

Empirically, I think it's pretty clear that most people are willing to trade off pleasure and pain for themselves.

I say a good deal about this in Chapter 2. I also discuss the moral relevance of such intrapersonal claims in section 3.2, "Intra- and Interpersonal Claims".

What analysis has been done of space colonization as a cause area?

You're welcome! :-)

Whether this is indeed a dissenting view seems unclear. Relative to the question of how space expansion would affect x-risk, it seems that environmentalists (of whom there are many) tend to believe it would increase such risks (though it's of course debatable how much weight to give their views). Some highly incomplete considerations can be found here:

The sentiment expressed in the following video by Bill Maher, i.e. that space expansion is a "dangerous idea"...

Eli Rose (3y):
FWIW, I don't find it at all surprising when people's moral preferences contradict themselves (in terms of likely implications, as you say). I myself have many contradictory moral preferences.
What analysis has been done of space colonization as a cause area?

Some have argued that space colonization would increase existential risks. Here is political scientist Daniel Deudney, whose book Dark Skies is supposed to be published by OUP this fall:

Once large scale expansion into space gets started, it will be very difficult to stop. My overall point is that we should stop viewing these ambitious space expansionist schemes as desirable, even if they are not yet feasible. Instead we should see them as deeply undesirable, and be glad that they are not yet feasible.[…] Space expansion may indeed be inevitable, bu
...
Eli Rose (3y):
Thanks for the perspective on dissenting views!
How much EA analysis of AI safety as a cause area exists?

Thanks for the stab, Anthony. It's fairly fair. :-)

Some clarifying points:

First, I should note that my piece was written from the perspective of suffering-focused ethics.

Second, I would not say that "investment in AI safety work by the EA community today would only make sense if the probability of AI-catalyzed GCR were decently high". Even setting aside the question of what "decently high" means, I would note that:

1) Whether such investments in AI safety make sense depends in part on one's values. (Though another critique I wo...

How much EA analysis of AI safety as a cause area exists?

In brief: the less of a determinant specific AGI structure is of future outcomes, the less relevant/worthy of investment it is.

How much EA analysis of AI safety as a cause area exists?

Interesting posts. Yet I don't see how they support that what I described is unlikely. In particular, I don't see how "easy coordination" is in tension with what I wrote.

To clarify, competition that determines outcomes can readily happen within a framework of shared goals, and as instrumental to some overarching final goal. If the final goal is, say, to maximize economic growth (or if that is an important instrumental goal), this would likely lead to specialization and competition among various agents that try out different things, and ...

Can you explain why this is relevant to how much effort we should put into AI alignment research today?
How much EA analysis of AI safety as a cause area exists?

Thanks for sharing and for the kind words. :-)

I should like to clarify that I also support FRI's approach to reducing AI s-risks. The issue is more how big a fraction of our resources approaches of this kind deserve relative to other things. My view is that, relatively speaking, we very much underinvest in addressing other risks, by which I roughly mean "risks not stemming primarily from FOOM or sub-optimally written software" (which can still involve AI plenty, of course). I would like to see a greater investment in broad explorative resear...

I think there are good reasons to think this isn't likely, aside from the possibility of FOOM:

* Strategic implications of AIs' ability to coordinate at low cost, for example by merging
* AGI will drastically increase economies of scale
How do most utilitarians feel about "replacement" thought experiments?
That's why the very first words of my comment were "I don't identify as a utilitarian."

I appreciate that, and as I noted, I think this is fine. :-)

I just wanted to flag this because it took me some time to clarify whether you were replying based on 1) moral uncertainty/other frameworks, or 2) instrumental considerations relative to pure utilitarianism. I first assumed you were replying based on 2) (as Brian suggested), and I believe many others reading your answer might draw the same conclusion. But a closer reading made it clear to me you were primarily replying based on 1).

How do most utilitarians feel about "replacement" thought experiments?
The contractarian (and commonsense and pluralism, but the theory I would most invoke for theoretical understanding is contractarian) objection to such things greatly outweighs the utilitarian case.

It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

failing to leave one galaxy, let alone one solar system for existing beings out of billions of galaxies would be ludicrously monomaniacal and overconfident

But a relevant question here is whether that also holds true given a purely utilitarian view, as opposed to...

It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

That's why the very first words of my comment were "I don't identify as a utilitarian."

I think the idea is that even a pure utilitarian should care about contractarian-style thinking for almost any practical scenario, even if there are some thought experiments where that's not the case.
How do most utilitarians feel about "replacement" thought experiments?

Thanks for posting this, Richard. :-)

I think it is worth explaining what Knutsson's argument in fact is.

His argument is not that the replacement objection against traditional/classical utilitarianism (TU) is plausible. Rather, the argument is that the replacement objection against TU (as well as other consequentialist views it can be applied to, such as certain prioritarian views) is roughly as plausible as the world destruction argument is against negative utilitarianism (NU). And therefore, if one rejects NU and favors TU, or a similarly "repla...

I agree and didn't mean to imply that Knutsson endorses the argument in absolute terms; thanks for the clarification.
Critique of Superintelligence Part 2

Thanks for writing this. :-)

Just a friendly note: even as someone who largely agrees with you, I must say that I think a term like "absurd" is generally worth avoiding in relation to positions one disagrees with (I also say this as someone who is guilty of having used this term in similar contexts before).

I think it is better to use less emotionally laden terms, such as "highly unlikely" or "against everything we have observed so far", not least since "absurd" hardly adds anything of substance beyond what these alter...

What Is Moral Realism?

Thanks for your reply :-)

For instance, I don't understand how [open individualism] differs from empty individualism. I'd understand if these are different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.

I agree completely. I identify equally as an open and empty individualist. As I've written elsewhere (in You Are Them): "I think these 'positions' ar...

What Is Moral Realism?

Thanks for writing this, Lukas. :-)

As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton's naturalist position is the one that comes the closest. I can identify as an objectivist, a constructivist, and a subjectivist, indeed even a Randian objectivist. It all rests on what the nature of the ill-specified "subject" in question is. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one. According to open individualism, the adoption of Randianis...

Cool! I think the closest I'll come to discussing this view is in footnote 18. I plan to have a post on moral realism via introspection about the intrinsic goodness (or badness) of certain conscious states. I agree with reductionism about personal identity, and I also find this to be one of the most persuasive arguments in favor of altruistic life goals.

I would not call myself an open individualist, though, because I'm not sure what the position is exactly saying. For instance, I don't understand how it differs from empty individualism. I'd understand if these were different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism. Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity. (It just turns out that egoism is not a well-defined concept either, and one has to make some judgment calls if one ever expects to encounter edge cases for which our intuitions give no obvious answers about whether something is still "me.")

Yeah, fair point. I mean, even Railton's own view has plenty of practical relevance in the sense that it highlights that certain societal arrangements lead to more overall well-being or life satisfaction than others. (That's also a point that Sam Harris makes.) But if that's all we mean by "moral realism", then it would be rather trivial. Maybe my criteria are a bit too strict, and I would indeed already regard it as extremely surprising if you get something like One Compelling Axiology that agrees on population ethics while leaving a few other things underdetermined.