seanrson

Hello! I am a law student at UChicago, where I help out with UChicago EA. Previously, I co-founded EA UCLA while studying philosophy. I find sentientism and suffering-focused ethics compelling.

Wiki Contributions

Comments

seanrson's Shortform

Yeah, I have been in touch with them. Thanks!

Why I am probably not a longtermist

Yeah, I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should exist. Maybe such an action involves an impermissible attitude of callous disregard for life or something like that. It seems like there are many parameters we could vary, but that might seem too ad hoc.

Why I am probably not a longtermist

I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).

Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with (b), maintaining that 'better than' or 'more valuable than' is not a transitive relation. Alternatively, we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P." In that case, we might deny that "a meh life is just as valuable as [or more/less valuable than] nonexistence" is meaningful, since there's no one for whom it is more valuable (assuming we reject comparativism, the view that things can be better or worse for merely possible persons). Michael St. Jules is probably aware of better ways this could be resolved. In general, I think that a lot of this stuff is tricky, and our inability to find a solution right now to theoretical puzzles is not always a good reason to abandon a view.

Why I am probably not a longtermist

Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.

seanrson's Shortform

Local vs. global optimization in career choice

Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession with keeping doors open in case I realize that I really should have done this other thing.

Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden's aptitudes post, etc.]. I'm still trying to process these ideas and think that the distinction between local and global optimization may help me (and hopefully others) with career planning.

Global optimization involves finding the best among all possible solutions. By its nature, EA is focused on global optimization, identifying the world's most pressing problems and what we can do to solve them. This technique works well at the community level: we can simultaneously explore and exploit, transfer money between cause areas and strategies, and plan across long timescales. But global optimization is not as appropriate in career planning. Instead, perhaps it is better to think about career choice in terms of local optimization, finding the best solution in a limited set. Local optimization is more action-oriented, better at developing aptitudes, and less time-intensive.
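To make the distinction concrete, here is a minimal sketch; the "career value" scores and the adjacency structure are entirely invented for illustration, not drawn from any real analysis. Global optimization compares every option at once, while local optimization only ever moves to the best option reachable from where you currently stand, so it can settle somewhere short of the global best.

```python
# Toy illustration only: the "career value" scores and which paths count as
# "nearby" are made-up assumptions, not real claims about these careers.

career_value = {
    "public health": 6,
    "AI safety": 9,
    "animal advocacy": 7,
    "bioengineering": 5,
    "earning to give": 4,
}

# Paths that are easy to move between from a given starting point (a limited set).
nearby = {
    "public health": ["animal advocacy", "earning to give"],
    "animal advocacy": ["public health", "bioengineering"],
    "bioengineering": ["animal advocacy"],
    "earning to give": ["public health"],
    "AI safety": [],
}

def global_optimum(values):
    """Global optimization: compare every option at once and pick the single best."""
    return max(values, key=values.get)

def local_optimum(start, values, neighbors):
    """Local optimization (hill climbing): repeatedly move to the best adjacent
    option, and stop once no neighbor beats where you already are."""
    current = start
    while True:
        options = neighbors.get(current, [])
        if not options:
            return current
        best = max(options, key=values.get)
        if values[best] <= values[current]:
            return current
        current = best

print(global_optimum(career_value))                           # -> "AI safety"
print(local_optimum("public health", career_value, nearby))   # -> "animal advocacy"
```

Starting from "public health," the local optimizer stops at "animal advocacy" even though "AI safety" scores higher in this toy setup, which is the trade-off at issue here: stability and tractability at the cost of possibly missing the global best.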

The differences between global and local optimization are perhaps similar to the differences between sequence-based and cluster-based thinking [reference Holden's post]. Like sequence-based thinking, which asks and answers questions with linear, expected-value style reasoning, global optimization is too vulnerable to subtle changes in parameters. Perhaps I've enrolled in a public health program but find AI safety and animal suffering equally compelling cause areas. If I'm too focused on global optimization, a single new report by the Open Philanthropy Project suggesting shorter timelines for transformative AI might lead me to drop out of my program and begin anew as a software engineer. But perhaps the next day, I find out that clean meat is not as inevitable as I once thought, so I leave my job and begin studying bioengineering.

Global optimization makes us more likely to vacillate excessively between potential paths. The problem, though, is that we need some stability in our goals to make progress and develop the aptitudes necessary for impact in any field. Add to this the psychological stress of constant skepticism about one's trajectory, and global optimization starts to look like a bad strategy for career planning. The alternative, local optimization, would have us look around our most immediate surroundings and do our best within that environment. Local optimization seems like a better strategy if we think that "good correlates with good," and aptitudes are likely to transfer if we later become convinced that no, really, I should have done this other thing.

I think the difficult thing for us is to find the right balance between these two optimization techniques. We don't want to fall into value traps or otherwise miss the forest for the trees, focusing too much on our most immediate options without considering more drastic changes. But too much global optimization can be similarly dangerous.

Why I am probably not a longtermist


You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turns out that people today prefer longtermist causes.

What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important.



What do you mean by this?

OP said, "I also care about people’s wellbeing regardless of when it happens." Are you interpreting this concern about future people's wellbeing as not including concern about their preferences?  I think the bit about a Netflix world is consistent with caring about future people's preferences contingent on future people existing. If we accept this kind of view in population ethics, we don't have welfare-related reasons to ensure a future for humanity. But still, we might have quasi-aesthetic desires to create the sort of future that we find appealing. I think OP might just be saying that they lack such quasi-aesthetic desires.

 (As an aside, I suspect that quasi-aesthetic desires motivate at least some of the focus on x-risks. We would expect that people who find futurology interesting would want the world to continue, even if they were indifferent to welfare-related reasons. I think this is basically what motivates a lot of environmentalism. People have a quasi-aesthetic desire for nature, purity, etc., so they care about the environment even if they never ground this in the effects of the environment on conscious beings.)

Perhaps you are referring to the value of creating and satisfying these future people's preferences? If this is what you meant, a standard line for preference utilitarians is that preferences only matter once they are created. So the preferences of future people only matter contingent on the existence of these people (and their preferences).

There are several ways to motivate this, one of which is the following: would it be a good thing for me to create in you entirely new preferences just so I can satisfy them? We might think not.

This idea is captured in Singer's Practical Ethics (from back when he espoused preference utilitarianism):

The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out... Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all.

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Yeah, my mistake; I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.

I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead just referring to extinction risks. It’s probably too nuanced for an intro syllabus, but MichaelA’s post (https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering) could help people to better understand the space of possible problems.

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:

  1. I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introduction to s-risks. The goal is to prevent people from identifying “longtermism” as just extinction risk reduction.

  2. Yeah, this is definitely true, but completely omitting such a distinctively EA concept as s-risks suggests that something needs to be changed.

  3. I think the reading I listed entitled “Common Ground for Longtermists” should address this worry, but perhaps we could add more. I tend to think that the potential for antagonism is outweighed by the value of broader thinking, but your worry is worth addressing.

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Hi Aaron, thanks for your reply. I’ve listed some suggestions in one of the hyperlinks above, but I’ll put it here too: https://docs.google.com/document/d/1niRwbh3eejByFQwoiZ0NiaSZDUawn206PUmHs7aKL0A/edit?usp=sharing

I have not put much time into this, so I’d love to hear your thoughts on the proposed changes.

seanrson's Shortform

Some criticism of the EA Virtual Programs introductory fellowship syllabus:

I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.

I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitarianism.

What explains this omission? A few possibilities:

  • The people who created the fellowship syllabus aren't very familiar with s-risks and the possible relevance of animal advocacy to longtermism.
    • This seems plausible to me. I think founder effects heavily influence EA, and the big figures in mainstream EA don't seem to discuss these ideas very much.
  • These topics seem too weird for an introductory fellowship.
    • It's true that a lot of s-risk scenarios are weird. But there's always some trade-off that we have to make between mainstream palatability and potential for impact. The inclusion of x-risks shows that we are willing to make this trade-off when the ideas discussed are important. To justify the exclusion of s-risks, the weirdness-to-impact ratio would have to be much larger. This might be true of particular s-risk scenarios, but even so, general discussions of future suffering need not reference these weirder scenarios. It could also make sense to include discussion of s-risks as optional reading (so as to avoid turning off people who are less open-minded).
    • The possible relevance of animal advocacy to longtermism does not strike me as any weirder than the discussion of factory farming, and the omission of this material makes longtermism seem very anthropocentric. (I think we could also improve on this by referring to the long-term future using terms like "The Future of Life" rather than "The Future of Humanity.")

More generally, I think that the EA community could do a much better job of communicating the core premise of longtermism without committing itself too much to particular ethical views (e.g., classical utilitarianism) or empirical views (e.g., that animals won't exist in large numbers and thus are irrelevant to longtermism). I see many of my peers just defer to the values supported by organizations like 80,000 Hours without reflecting much on their own positions, which strikes me as quite problematic. The failure to include a broader range of ideas and topics in introductory fellowships only exacerbates this problem of groupthink.

 

[Note: it's quite possible that the syllabus is not completely finished at this point, so perhaps these issues will be addressed. But I think these complaints apply more generally, so I felt like posting this.]
