Working to reduce extreme suffering for all sentient beings.
Author of Suffering-Focused Ethics: Defense and Implications; Reasoned Politics; & Essays on Suffering-Focused Ethics.
Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).
Thus it is not at all true that we ignore the possibility of many quiet civs.
But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).
Thanks, and thanks for the question! :)
It's indeed not obvious what I mean when I write "a smoothed-out line between the estimated growth rate at the respective years listed along the x-axis". It's neither the annual growth rate in that particular year in isolation (which is subject to significant fluctuations), nor the annual average growth rate from the previously listed year to the next listed year (which would generally not be a good estimate for the latter year).
Instead, it's an estimated underlying growth rate at that year based on the growth rates in the (more) closely adjacent years. I can see that the value I estimated for 2021 was 2.65 percent, the average growth rate from 2015-2022 (according to the data from The World Bank). One could also have chosen, say, 2020-2022, which would yield an estimate of 2.01 percent, but that's arguably too low an estimate given the corona recession.
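To make the averaging concrete, here is a minimal sketch (in Python) of the kind of window-averaging described above. The growth figures and the function name are my own illustrative placeholders rather than the actual World Bank numbers, and the simple arithmetic mean used here is just one way such a smoothed estimate might be computed:

```python
# Minimal sketch: estimating an "underlying" growth rate for a year by
# averaging annual growth rates over a surrounding window of years.
# NOTE: the figures below are illustrative placeholders, not actual
# World Bank data, and the arithmetic mean used here may differ from
# the exact smoothing behind the original estimate.

annual_growth_pct = {  # annual growth per year, in percent (made up)
    2015: 3.1, 2016: 2.8, 2017: 3.3, 2018: 3.2,
    2019: 2.6, 2020: -3.1, 2021: 6.0, 2022: 3.1,
}

def window_average_growth(growth_by_year, start, end):
    """Average annual growth rate (in percent) over the years start..end, inclusive."""
    years = list(range(start, end + 1))
    return sum(growth_by_year[y] for y in years) / len(years)

# A wide window (2015-2022) smooths over the corona recession and rebound,
# whereas a narrow window (2020-2022) is dominated by them.
print(window_average_growth(annual_growth_pct, 2015, 2022))
print(window_average_growth(annual_growth_pct, 2020, 2022))
```

As the comparison suggests, the choice of window matters: a short window around 2020-2022 yields a lower estimate precisely because it is dominated by the corona recession, which is why a wider window seems more reasonable here.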
I think this is an important point. In general terms, it seems worth keeping in mind that option value also entails option disvalue (e.g. the option of losing control and giving rise to a worst-case future).
Regarding long reflection in particular, I notice that the quotes above seem to mostly mention it in a positive light, yet its feasibility and desirability can also be separately criticized, as I've tried to do elsewhere:
First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it would seem to require strong limits to voluntary actions that diverge from the ideal of reflection. To think that we can choose to create a condition of long reflection may be an instance of the illusion of control. Human civilization is likely to develop according to its immediate interests, and seems unlikely to ever be steered via a common process of reflection.
Second, even if we were to secure a condition of long reflection, there is no guarantee that humanity would ultimately be able to reach a sufficient level of agreement regarding the right path forward — after all, it is conceivable that a long reflection could go awfully wrong, and that bad values could win out due to poor execution or malevolent agents hijacking the process.
The limited feasibility of a long reflection suggests that there is no substitute for reflecting now. Failing to clarify and act on our values from this point onward carries a serious risk of pursuing a suboptimal path that we may not be able to reverse later. The resources we spend pursuing a long reflection (which seems unlikely to ever occur) are resources not spent on addressing issues that might be more important and more time-sensitive, such as steering away from worst-case outcomes.
Thanks for your question, Péter :)
There's not a specific plan, though there is a vague plan to create an audio version at some point. One challenge is that the book is full of in-text citations, which in some places makes the book difficult to narrate (and it also means that it's not easy to create a listenable version with software). You're welcome to give it a try if you want, though I should note that narration can be more difficult than one might expect (e.g. even professional narrators often make a lot of mistakes that then need to be corrected).
Thanks for your comment, Michael :)
I should reiterate that my note above is rather speculative, and I really haven't thought much about this stuff.
1: Yes, I believe that's what inflation theories generally entail.
2: I agree, it doesn't follow that they're short-lived.
In each pocket universe, couldn't targeting its far future be best (assuming risk-neutral expected value-maximizing utilitarianism)? And then the same would hold across pocket universes.
I guess it could be; I suppose it depends both on the empirical "details" and one's decision theory.
Regarding options a and b, a third option could be:
c: There is an ensemble of finitely many pocket universes wherein new pocket universes emerge in an unbounded manner for eternity, where there will always be a vast predominance of (finitely many) younger pocket universes. (Note that this need not imply that any individual pocket universe is eternal, let alone that any pocket universe can support the existence of value entities for eternity.) In this scenario, for any summation between two points in "global time" across the totality of the multiverse, earlier "pocket-universe moments" will vastly dominate. That might be an argument in favor of extreme neartermism (in that kind of scenario).
But, of course, we don't know whether we are in such a scenario — indeed, one could argue that we have strong anthropic evidence suggesting that we are not — and it seems that common-sense heuristics would in any case speak against giving much weight to these kinds of speculative considerations (though admittedly such heuristics also push somewhat against a strong long-term focus).
These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.
I don't agree that these points are properly disclaimed in the post. I think the post gives an imbalanced impression of the discussion and potential biases around these issues, and I think that impression is worth balancing out, even if presenting a balanced impression wasn't the point of the post.
The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the recommendation to take animal welfare more seriously.
I don't think this remark relates so closely to my comment. My comment wasn't about a mere "recommendation to take animal welfare more seriously", but rather about biases that may influence us when it comes to evaluations of arguments regarding the moral status of, for example, speciesism and veganism, as well as about the practical feasibility of veganism. It's not my impression that considerations about such potential biases, and the arguments and research that relate to them (this paper being another example of such research), are familiar to everyone reading the EA Forum.
I have the same impression with respect to philosophical arguments against speciesism (which generally have far stronger implications than just a recommendation to take animal welfare more seriously). For example, it's not my impression that everyone reading the EA Forum is familiar with the argument from species overlap. Indeed, it seems to me that this argument and its implications are generally underappreciated even among most animal advocates.
I agree that vegan advocacy is often biased and insufficiently informed. That being said, I think similar points apply with comparable, if not greater, strength in the "opposite" direction, and I think we end up with an unduly incomplete perspective on the broader discussion around this issue if we only (or almost only) focus on the biases of vegan advocacy alone.
For example, in terms of identifying reasonable moral views (which, depending on one's meta-ethical view, isn't necessarily a matter of truth-seeking, but perhaps at least a matter of being "plausible-view-seeking"), it seems that there are strong anthropocentric and speciesist biases that work against a fair evaluation of the arguments against speciesism, and which likewise work against a fair evaluation of the moral status of veganism (e.g. from an impartial sentiocentric perspective).
Similarly, with respect to the feasibility of veganism, it seems that factors such as personal inconvenience and perceived stigma against vegans plausibly give rise to biases (in many people) toward overstating the difficulties and hazards of veganism (as also briefly acknowledged in the OP: "I’m sure many people do overestimate the difficulties of veganism").
Relatedly, with respect to the section "What do EA vegan advocates need to do?", I agree with the recommendation to "Take responsibility for the nutritional education of vegans you create". But by extension, an impartial sentiocentric perspective (and even just moderately impartial ones) would also endorse an analogous recommendation like "Take responsibility for the harm that you directly cause to, or fail to prevent for, non-human animals". It seems important not to exclude that aspect of our moral responsibility, and indeed to explicitly include it, as inconvenient as it admittedly is.
The view obviously does have "implausible" implications, if that means "implications that conflict with what seems obvious to most people at first glance".
I don't think what Knutsson means by "plausible" is "what seems obvious to most people at first glance". I also don't think that's a particularly common or plausible use of the term "plausible". (Some examples of where "plausible" and "what seems obvious to most people at first glance" plausibly come apart include what most people in the past might at first glance have considered obvious about the moral status of human slavery, as well as what most people today might at first glance say about the moral status of farming and killing non-human animals.)
Few people agree that "pleasure" and "happiness" are totally worthless in themselves.
Note that Knutsson does not deny that pleasure and happiness are worthwhile in the sense of being better for a person than unpleasure and unhappiness (cf. "What about making individuals happier? Yes, we should do that."). Nor does he deny that certain experiences can benefit existing beings (e.g. by satisfying certain needs). What he instead argues against is the claim that pleasure and experiential happiness are something "above" or "on the other side of" a completely undisturbed state.
As for the claim about "few people" (and setting aside that majority opinion is hardly a good standard for plausibility, as I suspect you'd agree), it's not clear that this "few people" claim is empirically accurate, especially if it concerns the idea that pleasure isn't something "above" a completely undisturbed state. The following is an apropos quote:
The intuition that the badness of suffering doesn’t compare to the supposed badness of inanimate matter (as non-pleasure) seems very common, and the same goes for the view that contentment is what matters, not pleasure-intensity [cf. Gloor, 2017, sec. 2.1]. There are nearly 1.5 billion Buddhists and Hindus, and while Buddhism is less explicit and less consequentialist than negative utilitarianism, the basic (though not uniform) Buddhist view on how pleasure and suffering are valued is very similar to negative utilitarianism; Hinduism contains some similar views. Ancient Western philosophers such as Epicurus and some Stoics proposed definitions of “happiness” in terms of the absence of suffering.
(On Buddhism and Epicureanism, see e.g. Breyer, 2015; Sherman, 2017; and the recent review of minimalist views of wellbeing by Teo Ajantaival.)
The reason this matters is that EA frequently makes decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.
I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):
I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to modest considerations in our assessments of how to act, all things considered.
My claims about how I think these would be "weak to modest considerations in our assessments of how to act" are not predicated on the exact manner in which I represent my beliefs: I'd say the same regardless of whether I'm speaking in purely qualitative terms or in terms of ranges of probabilities.
In summary, people should either start stating their uncertainty explicitly, or they should start saying "I don't know".
FWIW, I do state uncertainty multiple times, though in qualitative rather than quantitative terms. A few examples:
This essay contains a lot of speculation and loose probability estimates. It would be tiresome if I constantly repeated caveats like “this is extremely speculative” and “this is just a very loose estimate that I am highly uncertain about”. So rather than making this essay unreadable with constant such remarks, I instead say it once from the outset: many of the claims I make here are rather speculative and they mostly do not imply a high level of confidence. ... I hope that readers will keep this key qualification in mind.
As with all the numbers I give in this essay, the following are just rough numbers that I am not adamant about defending ...
Of course, this is a rather crude and preliminary analysis.
Yeah, it would make sense to include it. :) As I wrote, "Robin Hanson has many big ideas", and since the previous section was about signaling and status, I just mentioned some other examples here instead. Prediction markets could have been another one (though they're part of futarchy).