seanrson

Hello! I am a law student at UChicago, where I help out with UChicago EA. Previously, I co-founded EA UCLA while studying philosophy.

Comments

What organizations advise on human and AI wellness careers?

CRS (the Center for Reducing Suffering) has written about career advice for s-risks.

CLR (the Center on Long-Term Risk) does some coaching.

HLI (the Happier Lives Institute) seems to be working on career recommendations.

Dismantling Hedonism-inspired Moral Realism

I might be misunderstanding, but I don't think the intuition you mention is really an argument for hedonism: one can agree that there must be beings with conscious experiences for anything to matter without concluding that conscious experience itself is the only thing that matters.

Animal welfare EA and personal dietary options

I think this analysis should be more transparent about its reliance on something like total utilitarianism and on an assumed symmetry between happiness and suffering. Without these assumptions, the instances of extreme suffering and exploitation in factory farming more clearly entail "approximate veg*nism."

Consider the fate of a broiler chicken being boiled alive. Many people think that such extreme suffering cannot be counterbalanced by positive aspects of the chicken's own life, so there is no way to make its life "net positive." Moreover, many moral positions deny that the extreme suffering of one individual can be counterbalanced by positive aspects of others' lives. So even if most farmed animals did have lives better than non-existence, we might still find the whole practice objectionable because of how it affects the worst off. Looking beyond pain and pleasure, many perspectives object to our instrumentalization and exploitation of other sentient beings, which is inherent to the practice of animal farming. And some subscribe to an asymmetry in population ethics, on which creating bad lives is worse than creating good lives is good; this weighs strongly against factory farming even supposing that the average farmed life is net positive.

I'm not saying that all of these positions are correct, only that we should do a better job of clarifying our background assumptions instead of just saying "EAs should..."

I think your discussion of concentration camps in the comments further highlights the need to look beyond one particular moral perspective. Even if you are right that most lives in concentration camps were better than non-existence, many people would find objectionable the idea of "sentience-maximizing concentration camps," i.e. supporting the creation of new lives in the camps while slowly working to improve conditions, rather than banning the practice altogether (supposing these are the only two options). Again, this judgment could be motivated by the sorts of positions I mentioned above.

Longtermism and animal advocacy

Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, since they are defined along different dimensions: longtermism concerns the temporal scope of effects, while animal advocacy concerns whose interests we focus on. Of course, one could argue that animal interests become negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.

This post does a great job of emphasizing ways in which animal advocacy should inform our efforts to improve the very long-term future, and ways in which a focus on the very long-term future should inform animal advocacy.  

This is a key reading for anyone who wants to think more broadly about longtermism. We used this post as part of a fellowship at UCLA focused on effective animal advocacy, and our participants found it very thought-provoking. 

Why I prioritize moral circle expansion over artificial intelligence alignment

I come back to this post quite frequently when considering whether to prioritize MCE (via animal advocacy) or AI safety. It seems that these two cause areas often attract quite different people with quite different objectives, so this post is unique in its attempt to compare the two based on the same long-term considerations. 

I especially like the discussion of bias. Although some might find the whole discussion a bit ad hominem, I think people in EA should take seriously the worry that certain features common in the EA community (e.g., an attraction towards abstract puzzles) might bias us towards particular cause areas.

I recommend this post for anyone interested in thinking more broadly about longtermism.

seanrson's Shortform

Yeah I have been in touch with them. Thanks!

Why I am probably not a longtermist

Yeah, I'm not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral, but that once the life exists, we have reason to end it (since the life is bad for that person, though we'd have to make further sense of that claim). Deontologists could say that there is a constraint against bringing tortured lives into existence, but not because of the life's contribution to some "total goodness" of the world. Presumably we'd want some further explanation for why this constraint should exist; maybe such an action involves an impermissible attitude of callous disregard for life, or something like that. There are many parameters we could vary here, but that might seem too ad hoc.

Why I am probably not a longtermist

I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).

Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. There are several ways we could resolve this apparent issue. We could (as I think Larry Temkin and others have done) accept (b), maintaining that 'better than' or 'more valuable than' is not a transitive relation. Alternatively, we could adopt a sort of "tethered good approach" (following Christine Korsgaard), maintaining that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P." In that case, we might deny that "a meh life is just as valuable as [or more/less valuable than] nonexistence" is meaningful, since there is no one for whom it is more valuable (assuming we reject comparativism, the view that things can be better or worse for merely possible persons). Michael St. Jules is probably aware of better ways this could be resolved. In general, a lot of this stuff is tricky, and our current inability to solve a theoretical puzzle is not always a good reason to abandon a view.

Why I am probably not a longtermist

Re: the dependence on future existence of the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so their contingency on future lives seems pretty clear. Much less so for the others, unless we are convinced that people only value those instrumentally.
