seanrson

Comments

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

Yeah I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable.

https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering is helpful.

What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"?

Just adding onto this, for those interested in learning how a Kantian meta-ethical approach might be compatible with a consequentialist normative theory, see Kagan's "Kantianism for Consequentialists": https://campuspress.yale.edu/shellykagan/files/2016/07/Kantianism-for-Consequentialists-2cldc82.pdf

Questions for Peter Singer's fireside chat in EAGxAPAC this weekend

Has Singer ever said anything about s-risks? If not, I’m curious to hear his thoughts, especially concerning how his current view compares to what he would’ve thought during his time as a preference utilitarian.

Longtermism and animal advocacy

Sorry, I'm a bit confused about what you mean here. I meant to ask about the prevalence of a view that gives animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans'. But saying their interests are less strong is different from saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.

some concerns with classical utilitarianism

Yeah, I think 80,000 Hours has been a bit careless. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

Whoops, yeah, I meant to say that GPI is good about this but that the transparency and precision get lost as ideas spread. Fixed the confusing language in my original comment.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Yeah, this is another really good example of how EA is lacking in transparent reasoning. This is especially problematic since many people probably don't have the conceptual resources needed to identify the assumption or see how it relates to other EA ideas, so the response might just be a general aversion to EA.

That article is a bit older (2017), so maybe it's more forgivable, but its coverage of the asymmetry is pretty bad.

As another piece of evidence, my university group is using an introductory fellowship syllabus recently developed by Oxford EA, and there are zero required readings related to population ethics or how different views there might affect cause prioritization. Instead, extinction risks are presented as pretty overwhelmingly pressing.

FWIW, I'm skeptical of this, too. I've responded to that paper here, and have discussed some other concerns here.

Thanks, gonna check these out!

Longtermism and animal advocacy

Thanks for this post. Looking forward to more exploration on this topic.

I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there. 

As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals that you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.

some concerns with classical utilitarianism

Thanks for writing this up.

These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about which philosophical assumptions are being made and how they affect cause prioritization. The philosophers associated with GPI are good about this, but this transparency and precision often get lost as ideas spread.

For instance, in discussions of longtermism, totalism often seems to be assumed without that assumption being made clear. Other views are often misrepresented, for example in 80,000 Hours' post "Introducing longtermism", where they say:

This objection is usually associated with a “person-affecting” view of ethics, which is sometimes summed up as the view that “ethics is about helping make people happy, not making happy people”. In other words, we only have moral obligations to help those who are already alive...

But of course, person-affecting views are diverse, and they need not imply presentism.

From my experience leading an EA university group, this lack of transparency and precision often causes people with different philosophical assumptions to reject longtermism altogether, which is a mistake since longtermism is robust across various population axiologies. I worry that the same sort of thing might cause people to reject other EA ideas.

seanrson's Shortform

Hi all, I'm sorry if this isn't the right place to post. Please redirect me if there's somewhere else this should go.

I'm posting on behalf of my friend, an aspiring AI researcher in his early 20s who is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably within the USA, especially California).

Please message jeffreypythonclass+ea@gmail.com if you're interested!

Moral Anti-Realism Sequence #3: Against Irreducible Normativity

AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:

"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit... And if you have those two claims, then you’ve got to conclude [along the lines of the paralysis argument]".


Also, I'm not sure how Lukas would reply, but I think one way of defending his claim that you criticize, namely that "the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled", is by appealing to impossibility theorems in ethics. Given such theorems, we truly won't be able to avoid all counterintuitive results (see e.g. Arrhenius 2000, Greaves 2017). This also shouldn't surprise us too much if we accept the evolved nature of some of our moral intuitions.

Book Review: Deontology by Jeremy Bentham

This was such a fun read. Bentham is often associated with psychological egoism, so it seems somewhat odd to me that he felt the need to exhort readers to pursue their own pleasure (since apparently all actions are done on that basis anyway).
