Yeah I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should ...
I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).

Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with...
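To make the structure of the transitivity worry explicit (a mere-addition-style sketch of my own, not anything quoted from Temkin): write $\succ$ for "all-things-considered better than" and $\succsim$ for "at least as good as." The puzzle arises because plausible pairwise judgments like

$$A^{+} \succsim A, \qquad B \succ A^{+}, \qquad \text{together with transitivity} \implies B \succ A$$

force a conclusion many want to resist, and denying the transitivity of these relations is one way to block the final step.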
Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.
Local vs. global optimization in career choice

Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession with keeping doors open in case I realize that I really should have done this other thing.
Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden's aptitudes post, etc.]. I'm still trying to process these ideas and think that the disti...
You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turns out that people today prefer longtermist causes.
What do you think about the preferences of future people? You seem to take the "rather make people happy than make happy people" point of view on population ethics, but future people's preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important.
Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.
I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead just referring ...
Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:
I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introduction...
Ah sorry, I hadn't seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:
Hi Aaron, thanks for your reply. I’ve listed some suggestions in one of the hyperlinks above, but I’ll put it here too: https://docs.google.com/document/d/1niRwbh3eejByFQwoiZ0NiaSZDUawn206PUmHs7aKL0A/edit?usp=sharing
I have not put much time into this, so I’d love to hear your thoughts on the proposed changes.
Some criticism of the EA Virtual Programs introductory fellowship syllabus:
I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.
I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitarianism...
Yeah I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable. https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering is helpful.
Just adding onto this, for those interested in learning how a Kantian meta-ethical approach might be compatible with a consequentialist normative theory, see Kagan's "Kantianism for Consequentialists": https://campuspress.yale.edu/shellykagan/files/2016/07/Kantianism-for-Consequentialists-2cldc82.pdf
Has Singer ever said anything about s-risks? If not, I’m curious to hear his thoughts, especially concerning how his current view compares to what he would’ve thought during his time as a preference utilitarian.
Sorry, I'm a bit confused about what you mean here. I meant to be asking about the prevalence of a view giving animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans'. But I think saying they are less strong is different from saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.
Ya, I think 80,000 Hours has been a bit careless here. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.
Whoops, yeah, I meant to say that GPI is good about this but that the transparency and precision get lost as ideas spread. I've fixed the confusing language in my original comment.
In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:
Yeah this is another really great example of how EA is lacking in transparent reasoning. This is especially problematic s...
Thanks for this post. Looking forward to more exploration on this topic.
I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there.
As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals which you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.
Thanks for writing this up.
These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about what philosophical assumptions are being made, and how this affects cause prioritization. Of course the philosophers associated with GPI are good about this, but often this transparency and precision get lost as ideas spread.
For instance, in discussions of longtermism, totalism often seems to be assumed without making that assumption clear. Other views are often misrepresented,...
Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the...
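For what it's worth, the arithmetic in that quote only comes out to 5000 if a generation is taken to be roughly 100 years, an assumption I'm reading into it rather than one it states:

$$\mathbb{E}[\text{future generations}] = \underbrace{0.05}_{P(\text{survival})} \times \frac{10{,}000{,}000\ \text{years}}{100\ \text{years per generation}} = 5000.$$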
Hi all, I'm sorry if this isn't the right place to post. Please redirect me if there's somewhere else this should go.
I'm posting on behalf of my friend, who is an aspiring AI researcher in his early 20s and is looking to live with likeminded individuals. He currently lives in Southern California, but is open to relocating (preferably within the USA, especially California).
Please message email@example.com if you're interested!
AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:
"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of b... (read more)
This was such a fun read. Bentham is often associated with psychological egoism, so it seems somewhat odd to me that he felt the need to exhort readers to pursue their own pleasure (since apparently all actions are done on that basis anyway).
Could you say more (or work on that post) about why formal methods will be unhelpful? Why are places like Stanford, CMU, etc. pushing to integrate formal methods with AI safety? Also, Paul Christiano has suggested formal methods will be useful for avoiding catastrophic scenarios. (Will update with links if you want.)