All of seanrson's Comments + Replies

Why I am probably not a longtermist

Yeah I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should ... (read more)

Why I am probably not a longtermist

I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).

Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with... (read more)

1 · Mauricio · 18h: Hm, I can't wrap my head around rejecting transitivity. Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood.
Why I am probably not a longtermist

Re: how values like "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc." depend on future existence: I think most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e., their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.

1 · Mauricio · 2d: Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence? (In other words, "objective list" theories of well-being (if they hold some lives to be better than neutral) + transitivity seem to imply that creating good lives is possible and valuable, which implies (*) is false. People with these theories of well-being could avoid that conclusion by (a) rejecting that some lives are better than neutral, or (b) rejecting transitivity. Do they?)
seanrson's Shortform

Local vs. global optimization in career choice

Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession with keeping doors open in case I realize that I really should have done some other thing.

Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden's aptitudes post, etc.]. I'm still trying to process these ideas and think that the disti... (read more)

Why I am probably not a longtermist


You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes.

What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that im

... (read more)
5 · EdoArad · 2d: Good points, thanks :) I agree with everything here. One view on how we impact the future is asking how we would want to construct it, assuming we had direct control over it. I think that this view lends more to the points you make, and it is where population ethics feels much murkier to me.

However, there are some things we might be able to put some credence on that we'd expect future people to value. For example, I think it's more likely than not that future people would value their own welfare. So while it's not an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions and definitely points at where (a potentially enormous amount of) value lies. Say, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising the interventions there actually are).

Regarding quasi-aesthetic desires, I agree and think that this is very important to understand further. Personally, I'm confused as to whether I should value these kinds of desires (even at the expense of something based on welfarism), or whether I should think of them as a bias to overcome. As you say, I also guess that this might be behind some of the reasons for differing stances on cause prioritization.
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.

I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead just referring ... (read more)

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:

  1. I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introducti

... (read more)

Thanks!

Ah sorry, I hadn't seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:

  • The dichotomy between x-risk reduction and s-risk reduction seems off to me. As I understand them, prominent definitions of x-risks [1] [2] [3] (especially the more thorough/careful discussion in [3]) are all broad enough for s-risks to count as x-risks (especially if we're talking about permanent / locked-in s-risks, which
... (read more)
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Hi Aaron, thanks for your reply. I've listed some suggestions in one of the hyperlinks above, but I'll put the link here too: https://docs.google.com/document/d/1niRwbh3eejByFQwoiZ0NiaSZDUawn206PUmHs7aKL0A/edit?usp=sharing

I have not put much time into this, so I’d love to hear your thoughts on the proposed changes.

seanrson's Shortform

Some criticism of the EA Virtual Programs introductory fellowship syllabus:

I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.

I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitar... (read more)

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

Yeah I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable.

https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering is helpful.

What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"?

Just adding onto this, for those interested in learning how a Kantian meta-ethical approach might be compatible with a consequentialist normative theory, see Kagan's "Kantianism for Consequentialists": https://campuspress.yale.edu/shellykagan/files/2016/07/Kantianism-for-Consequentialists-2cldc82.pdf

Questions for Peter Singer's fireside chat in EAGxAPAC this weekend

Has Singer ever said anything about s-risks? If not, I’m curious to hear his thoughts, especially concerning how his current view compares to what he would’ve thought during his time as a preference utilitarian.

Longtermism and animal advocacy

Sorry, I'm a bit confused about what you mean here. I meant to be asking about the prevalence of a view giving animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans'. But I think saying they are less strong is different from saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.

4 · MichaelStJules · 10mo: Ya, my point is that I'd guess most dedicated EAs would endorse the principle in the abstract, but they might not think animals matter much in practice. Also, for what it's worth, about half of EAs who responded to the diet question are at least vegetarian, and still more are reducing meat consumption (see the diet chart in the EA Survey 2019 community demographics post: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics).
some concerns with classical utilitarianism

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

Whoops, yeah, I meant to say that GPI is good about this but the transparency and precision gets lost as ideas spread. Fixed the confusing language in my original comment.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Yeah this is another really great example of how EA is lacking in transparent reasoning. This is especially problematic s... (read more)

Longtermism and animal advocacy

Thanks for this post. Looking forward to more exploration on this topic.

I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there. 

As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals which you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.

7 · MichaelStJules · 10mo: I would guess that most of the more dedicated EAs believe in something roughly like "equal consideration of interests" ("equal consideration of equal interests" to be more specific), but many might think nonhuman animals' interests are much less strong/important than humans', on average.
some concerns with classical utilitarianism

Thanks for writing this up.

These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about what philosophical assumptions are being made, and how this affects cause prioritization. Of course, the philosophers associated with GPI are good about this, but often this transparency and precision gets lost as ideas spread.

For instance, in discussions of longtermism, totalism often seems to be assumed without making that assumption clear. Other views are often misrepresented,... (read more)

7 · nil · 10mo: Thanks for the example! I worry that even when our philosophical assumptions are stated (which is already a good place to be in), it is easy to miss their important implications and to not question whether these implications make sense (as opposed to jumping directly to cause selection). (This kind of rigor would arguably be over-demanding in most cases but could still be a health measure for EA materials.)

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce th

... (read more)
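For reference, the arithmetic behind the quoted "5000 future generations" figure seems to be roughly the following (the ~100-year generation length is implied by the quoted numbers rather than stated in the excerpt):

\[
0.05 \times \frac{10{,}000{,}000 \ \text{years}}{100 \ \text{years per generation}} = 0.05 \times 100{,}000 = 5{,}000 \ \text{generations in expectation.}
\]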
seanrson's Shortform

Hi all, I'm sorry if this isn't the right place to post. Please redirect me if there's somewhere else this should go.

I'm posting on behalf of my friend, who is an aspiring AI researcher in his early 20s and is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably within the USA, especially California).

Please message jeffreypythonclass+ea@gmail.com if you're interested!

2 · Linch · 1y: Can you be a bit more specific than "aspiring AI researcher"? E.g., are they interested in AI safety, interested in AI research for other EA reasons, interested in $, interested in AI as a scientific question, etc.?
7 · MichaelDickens · 1y: You might try the East Bay EA/Rationality Housing Board [https://www.facebook.com/groups/2266502166822026].
Moral Anti-Realism Sequence #3: Against Irreducible Normativity

AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:

"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of b... (read more)

Book Review: Deontology by Jeremy Bentham

This was such a fun read. Bentham is often associated with psychological egoism, so it seems somewhat odd to me that he felt the need to exhort readers to pursue their own pleasure (since apparently all actions are done on this basis anyway).

The academic contribution to AI safety seems large

Could you say more (or work on that post) about why formal methods will be unhelpful? Why are places like Stanford, CMU, etc. pushing to integrate formal methods with AI safety? Also Paul Christiano has suggested formal methods will be useful for avoiding catastrophic scenarios. (Will update with links if you want.)

5 · adamShimi · 1y: Hum, I think I wrote my point badly in the comment above. What I mean isn't that formal methods will never be useful, just that they're not really useful yet, and will require more pure AI safety research to become useful. The general reason is that all formal methods try to show that a program follows a specification on a model of computation. Right now, a lot of the work on formal methods applied to AI focuses on adapting known formal methods to the specific programs (say, neural networks) and the right model of computation (in what contexts do you use these programs, and how can you abstract their execution to make it simpler). But one point they fail to address is the question of the specification.

Note that when I say specification, I mean a formal specification. In practice, it's usually a modal logic formula, in LTL [https://en.wikipedia.org/wiki/Linear_temporal_logic] for example. And here we get to the crux of my argument: nobody knows the specification for almost all of the AI properties we care about. Nobody knows the specification for "recognizing kittens" or "correctly answering a question in English". And even for safety questions, we don't yet have a specification of "doesn't manipulate us" or "is aligned". That's the work that still needs to be done, and that's what people like Paul Christiano and Evan Hubinger, among others, are doing. But until we have such properties, formal methods will not be really useful for either AI capability or AI safety.

Lastly, I want to point out that working on formal methods applied to AI is also a means to get money and prestige. I'm not going to go full Hanson and say that's the only reason, but it's still a part of the international situation. I have examples of people getting AI-related funding in France for a project that is really, truly useless for AI.
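To make the "formal specification" point concrete, here is a minimal illustrative sketch (not from the comment above) of what such specifications look like in LTL, using the standard G ("globally/always") and F ("eventually") operators:

\[
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{response}) \quad \text{("every request is eventually followed by a response")}
\]
\[
\mathbf{G}\,\neg\,\mathit{unsafe} \quad \text{("the system never enters an unsafe state")}
\]

Predicates like "request" or "unsafe" are straightforward to define for, say, a communication protocol or a simple controller; the gap the comment describes is that nobody currently knows how to write the analogous predicate for properties like "doesn't manipulate us" or "is aligned", so there is nothing precise for a model checker or theorem prover to verify a system against.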