All of Mauricio's Comments + Replies

Is doing the most good really that simple?

Thanks for the question! You might find articles like this one interesting. (That article's a bit outdated but I'd guess still roughly right.)

Biblical advice for people with short AI timelines

or b) kill/enslave everyone

Tangent: did you mean this literally? I know some folks who are worried about people being killed, but I haven't heard of anyone worrying about human enslavement (and that distinction seems like a point in favor of "people worried about this stuff aren't picking scary-sounding scenarios at random," since automated labor would presumably be way more efficient than human labor in these scenarios).

I have heard some people who are concerned about human extinction vis-à-vis AI. Re: "enslave," that wasn't a great wording choice. I was trying to gesture at s-risks like a stable dictatorship underpinned by AI, or other scenarios where humanity loses its autonomy.

Takeaways on US Policy Careers (Part 2): Career Advice

Thanks! I've added a caveat to the post, linking to this comment.

AI Governance Fundamentals - Curriculum and Application

Thank you! That would be great.

(I'm not sure I'd have capacity to manage hosting of the audio files, so no worries if that would be a requirement (although my sense was that it generally isn't, with Nonlinear?))

Kat Woods: Fantastic! You're right, we'd just put it into podcast form so people could listen on their podcast players, so no need to host the audio files or anything. I'll DM you with more details.
How would you define "existential risk?"

Of the definitions proposed in this paper, I like the clarity and generality of the 3rd: Existential risk is risk of an existential catastrophe, and:

An existential catastrophe is an event which causes the loss of a large fraction of expected value.

(I'd count non-events, since having another term for those isn't catching on, and I'd append "for the long-term future.")

The Explanatory Obstacle of EA

Seems right; maybe this was implied, but I'd add (D) pick a cause & intervention that scores very well under scale/neglectedness/tractability/personal fit considerations

The Explanatory Obstacle of EA

Agreed! So maybe differences in feasible impact are: career >> (high-skill / well-located) volunteering > donations >> other stuff

The Explanatory Obstacle of EA

Yup, this also lines up with how (American) undergrads empirically seem to get most enthusiastic about career-centered content (maybe because they're starved for good career guidance/direction).

And a nitpick:

In most cases, individuals can do much more good by changing their career path or donating

I initially nodded along as I read this, but then I realized that intuition came partly from comparing effective donations with ineffective volunteering, which might not be comparing apples to apples. Do effective donations actually beat effective volunteering... (read more)

ElliotJDavies: Controversial opinion, but I think most volunteers are probably fairly ineffective, enough to round down to zero. However, it's super easy to be an effective volunteer. Simply: A) Be autonomous/self-motivated, B) Put in some significant amount of effort per week, C) Be consistent over a long period of time (long enough to climb up the skill curve for the tasks at hand).
berglund: Fair enough. I would guess you can usually have a higher impact through your career, since you are doing something you've specialized in. But the first two examples you bring up seem valid.
Should I go straight into EA community-building after graduation or do software engineering first?

Maybe you're already thinking this, but specifically for community building / meta-EA career paths, my impression is doing 2 years of meta-EA would be much better (in terms of both career capital and direct impact) than 2 years of software engineering at a startup. Intuitions behind this include:

  • Community building experience seems to be highly valued by meta-EA employers / grantmakers / cofounders, because of its clear relevance
  • Maybe the impacts of community building are cumulative, which makes additional work early on especially valuable
  • I'd guess rece
... (read more)

My questions would be: do you want to do community building and EA-related projects in 5-10 years, and do you really have great community-building opportunities now (good org, good strategy for impact, some mentorship)?

If yes for both, then doing community-building looks good. It aids your trajectory more than software does. It won't sacrifice much career/financial security, since software is not that credential-based; you can return to it later. And going into community building now won't raise any more eyebrows than doing it in two years would.

If no for one or both questions, e.g. if you see yourself as suited to AIS engineering, then software might be better.

Is it no longer hard to get a direct work job?

In particular, just in the case of uni EA groups, I imagine that there might be one organizer for every, say, 20 to 50 people (?? I really have no idea about this), which is also a ratio of 2 to 5%.

Anecdotally, my (potentially skewed) personal impression is that [students who are very dedicated, hard-working, decent fits for university organizing, and apply for grants to do university group organizing] have chances > 50% of getting some grant.

(By "very dedicated," here and in the other comment, I mean to point at something like: has a solid understan... (read more)

EA at Georgia Tech presently has 3 student organizers and ~40 students who have done the Effective Altruism Fellowship within the past year, plus perhaps 15 other people who have attended general meetings, so let's say 50 members. 3 organizers : 50 members is a ratio of 6%. But our acceptance rate for people interested in becoming an organizer is actually 100%. (Theoretically, we would filter for people who generally agree with the content in the introductory fellowship and are reliable, hard-working, and a decent fit for some organizer position, etc. We w... (read more)

Is it no longer hard to get a direct work job?

As a minor point, I'd consider potential publication biases when interpreting articles about how hard it can be to get these jobs. I imagine if someone had an easy time getting one of these jobs, they might be hesitant to write a post about it, to avoid looking self-celebratory or insensitive.

I think this is a major factor. From what I can tell, some people have very easy times getting EA jobs, and some have very hard times getting EA jobs. This in itself really isn't much information; we'd really need many stats to get a better sense of things.

For what it's worth, I wouldn't read this as, "the people who have a hard time... are just bad candidates". It's more that EA needs some pretty specific things, and there are some sorts of people for whom it's been very difficult to find a position, even though some of these people are quite brilliant in many ways.

Is it no longer hard to get a direct work job?

My impression is Michael's update could easily be directionally correct if we refine that estimate.

  • If we count direct work in non-EA orgs (which Michael seemed interested in), this opens many more options; ~34% of survey respondents (11.7 + 8.7 + 5.6 + 4.2 + 3.9) seem to be doing such work, although it's unclear how many of them are working on causes they see as most pressing.
  • The 2020 survey of the community found that ~20% of respondents self-reported "high engagement" with EA. (And that's likely an overestimate due to survey selection effects.) This k
... (read more)
NunoSempere: Makes sense
What is most confusing to you about AI stuff?

I've since gotten a bit more context, but I remember feeling super confused about these things when first wondering how much to focus on this stuff:

  1. Before we get to "what's the best argument for this," just what are the arguments for (and against) (strongly) prioritizing AI stuff (of the kind that people in the community are currently working on)?
    1. People keep saying heuristic-y things about self-improving AI and paperclips--just what arguments are they making? (What are the end-to-end / logically thorough / precise arguments here?)
    2. A bunch of people see
... (read more)
The Case for Reducing EA Jargon & How to Do It

Thanks Akash! This seems clear to me when it comes to communicating with people who are new to the community / relevant jargon. Just to clarify, would you also advocate for reducing jargon among people who are mostly already familiar with it? There, it seems like the costs (to clarity) of using jargon are lower, while the benefits (to efficiency and--as you say--sometimes to precision) are higher.

(I'd guess you're mainly talking about communication with newer people, but parts like "Ask people to call out whenever you’re using jargon" make me unsure.)

(I also suspect a lot of the costs and benefits come from how jargon affects people's sense of being in an in-group.)

We need alternatives to Intro EA Fellowships

Thanks!

Yup, to be clear I didn't mean to suggest "more of the same," although you're right that my examples near the end may have been overly anchored to the events fellowships currently have.

a more effective way of learning might be for someone to summarize things / identify key ideas for you

Hm, maybe. One hypothesis is that people tend to understand and remember ideas much better if they engage with them for longer amounts of time. If true, I think this would mean more (good) content is better. This seems likely to me because:

  • It seems much more com
... (read more)
We need alternatives to Intro EA Fellowships

Tangent/caveat to my point about practice: Actually, it seems like in the examples I mentioned, practicing on easier versions of a problem first is often very helpful for being able to do good practice on equivalents of the real thing (e.g. musical scale drills, sport drills, this). I wonder what this means for EA groups.

(On the other hand, I'm not sure this is a very useful set of analogies--maybe the more important thing for people who are just getting into EA is for them to get interested in core EA mindsets/practices, rather than skilled in them, which... (read more)

We need alternatives to Intro EA Fellowships

Good points! Agree that reaching out beyond overrepresented EA demographics is important--I'm also optimistic that this can be done without turning off people who really jive with EA mindsets. (I wish I could offer more than anecdotes, but I think over half of the members of my local group who are just getting involved and seem most enthusiastic about EA stuff are women or POC.)

I'm not convinced that weird people know how to do good better than anybody else

I also wouldn't make that claim about "weird people" in general. Still, I think it's pretty strai... (read more)

We need alternatives to Intro EA Fellowships

Thanks for the thoughtful response! I think you're right that EA projects being legibly good to people unsympathetic with the community is tough.

It is practice at what this process looks like, it is a way to improve our community in a small but meaningful way

I like the first part; I'm still a bit nervous about the second part? Like, isn't one of the core insights of EA that "we can and should do much better than 'small but meaningful'"?

And I guess even with the first part (local projects as practice), advice I've heard about practice in many other cont... (read more)

Mauricio: Tangent/caveat to my point about practice: Actually, it seems like in the examples I mentioned, practicing on easier versions of a problem first is often very helpful for being able to do good practice on equivalents of the real thing (e.g. musical scale drills, sport drills, this [https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play]). I wonder what this means for EA groups. (On the other hand, I'm not sure this is a very useful set of analogies--maybe the more important thing for people who are just getting into EA is for them to get interested in core EA mindsets/practices, rather than skilled in them, which the "practice" examples emphasize. And making someone do scale/sports drills probably isn't the best way to get them interested in something.)
Aaron_Scher: Again, thank you for some amazing thoughts. I'll only respond to one piece: "But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from: (1) taking people who are already into weird EA stuff and connecting them with one another, and (2) taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs." I obviously can't disagree with your anecdotal experience, but I think what you're talking about here is closely related to what I see as one of EA's biggest flaws: lack of diversity. I'm not convinced that weird people know how to do good better than anybody else, but by not creating a way for other people to be involved in this awesome movement, we lose the value they would create for us and the value we would create for them. There also seems to be a suspicious correlation between these kinds of "receptive to EA ideas" people and white men, which appears worrisome. That is, even if our goal is to target marketing to weird EAs or people receptive to EA, it seems like the way we're doing that might have some bias that has led our community to be disproportionately white and male relative to most general populations. On that note, I think learning about EA has made my life significantly better, and I think this will be the case for many other people. I think everybody who does an Intro Fellowship (and isn't familiar with EA) learns something that could be useful to their life – even if they don't join the community or become more involved. I don't want to miss out on these people, even if it's a more efficient allocation of time/resources to only focus on people we expect will become highly engaged. Shortform post coming soon about this 'projects idea' where I'll lay out the pros and cons.
We need alternatives to Intro EA Fellowships

Thanks for this! Tangent:

students are really excited about actually doing stuff. And this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited into the scope of our school (e.g., what can we do to improve student wellness the most? Decrease the school's carbon footprint? etc.).

Hm, I'm kind of nervous about the norms an EA group might set by limiting its projects' ambitions to its local community. Like, we know a dollar or an hour of work can do way more good if it's aimed at helping people in ex... (read more)

Aaron_Scher: Good points. We should have explained what our approach is in a separate post that we could link to, because I didn't explain it too well in my comment. We are trying to frame the project like so: This is not the end goal. It is practice at what this process looks like, it is a way to improve our community in a small but meaningful way. Put another way, the primary goals are skill building and building our club's reputation on campus. Another goal is to just try more stuff to help meta-EA community building; even though we have a ton of resources on community building, we don't seem to have all that many trials or examples of groups doing weird stuff and seeing what happens. Some of the projects we are considering are related to global problems (e.g., carbon labeling on food in the dining hall). I like the project ideas you suggest and we will consider them. One reason we're focusing on local is that the "international charity is colonialism" sentiment is really strong here. I think it would be really bad for the club if we got strongly associated with that sentiment. Attempting to dispel this idea is also on my to-do list, but low. Another point of note is that some of what the EA community does is only good in expectation. For instance, decreasing extinction risk by 0.5% per century is considered a huge gain for most EAs. But imagine tabling at a club fair and saying "Oh, what did we actually accomplish last year? We trained up students to spend their careers working on AI safety in the hopes of decreasing the chance of humanity ending from robots by 0.02%". Working on low-probability, high-impact causes and interventions is super important, but I think it makes for crappy advertising because most people don't think about the world in Expected Value. Side point to the side point: I agree that a dollar would go much further in terms of extreme poverty than college students, but I'm less sure about an hour of time. I am in this college community; I know what its ne
EA Communication Project Ideas

Thanks for this!

Make a written intro similar to Ajeya's talk

This script and these slides are heavily inspired by (and are several years more recent than) her talk--might be useful for someone who wants to do this.

Ben_West: Thanks! That does seem helpful.
We need alternatives to Intro EA Fellowships

Thanks!

Sorry, I'm a bit confused about how this relates to my response. It sounds like this is an argument for changing the distribution of content within the current fellowship structure, while my response was meant to be about which changes to the fellowship structure we should make. (Maybe this is meant to address my question about "what [content] can be cut?" to implement an activities-based fellowship? But that doesn't seem like what you have in mind either, since unlike in the activities-based fellowship you seem to be suggesting that we keep the tot... (read more)

Akash: Whoops--definitely meant my comment as a response to "what content can be cut?" And the section about activities was meant to show how some of the activities in the current fellowship are insufficient (in my view) & offer some suggestions for other kinds of activities. Regardless of whether we shift to a radically new model, or we try to revamp the existing structure, I think it'll be useful to dissect the current fellowship to see what content we most want to keep/remove. Will try to respond to the rest at some point soon, but just wanted to clarify!
We need alternatives to Intro EA Fellowships

Thanks! I'm sympathetic to the broad idea here, but the pitfalls you point out seem pretty significant (maybe less so for the 3-week version, but that one also seems most similar to the current structure).

My main hesitation with activity-based fellowships is that intro fellowships are already pretty light on content (as you point out, they could fit in a busy weekend), so I suspect that cutting content even more would mean leaving even more massive gaps in participants' knowledge of EA. (Right now, content is roughly an intro to core EA mindsets and an int... (read more)

Ashley Lin: Thanks Mauricio! I agree that some of the pitfalls for the alternatives, specifically challenges with accountability (more things being self-directed) and content (shorter timescales affording less time to consume content), seem significant. That said, I'm optimistic that there are ways to mitigate those challenges through program design. I think we agree that increasing accountability and quality content in Intro Fellowship-like things seems like a good idea. To me, the "current baselines of accountability and content" in the Intro Fellowship are not what we should be striving for, and adding more of what already exists might not be the best strategy? (Note: I think the worry about students getting busy and prioritizing classes is already an existing problem in Intro Fellowships.) The Intro Fellowship misses out on other ways accountability can be increased, some of which I've listed below (I liked your ideas around maybe making fellowships prestigious and offering stipends and included them). I also think there are ways to structure content where fellows spend less time reading, but still cover all the core material. Accountability can come from: weekly meetings with a facilitator and cohort of peers; being in-person (at a retreat); project deliverables; 1-on-1s; stipends; making the program more prestigious; mentors / EA professionals. Content: I think more is not always better, and I agree with Akash that the Intro Fellowship isn't sufficiently selective with content / selects for the wrong content. To me, most Intro syllabuses seem unwieldy--e.g., there are so many "recommended readings" and exercises which I've noticed fellows rarely do. Your concern that fellows might not do enough reading to gain a basic understanding of EA principles / cause areas is very valid. The general idea behind the strategies I share below that might mitigate this is that a more effective way of learning might be for someone to summarize things / identify key ideas for you:

TLDR: I agree that content is important, but I don't think the current version of the fellowship does a good job emphasizing the right kind of content. I would like to see more on epistemics/principles and less on specific cause areas. Also the activities can be more relevant.

Longer version: I share some of your worries, Mauricio. I think the fellowship (at least, the version that Penn EA does) currently has three kinds of content:

  • Readings about principles and ways of seeing the world (e.g., counterfactualism, effectiveness mindset, expanding one's moral c
... (read more)
Penn EA Residency Takeaways

Thanks for the detailed reflection! Nitpick, for the sake of readers having more complete info about the track record of residencies: I think it was Georgetown that got 1 Stanford EA organizer (who prefers to not be named here) for a couple of days. My understanding is this also wasn't enough time for them to do that much. (1 FTE also feels high, since there weren't enough ongoing group activities / emails in the mailing list / availability from local organizers for the organizer to spend more than several hours per day helping with Georgetown EA.)

The othe... (read more)

Takeaways on US Policy Careers (Part 2): Career Advice

Thanks for flagging!

Things may have changed since then. Also, at the time this was more true at Brookings

As you suggest, this other post cites a 2017 statistic which suggests this is still the case about Brookings and is becoming less true about think tanks in general (although the statistic is about "scholars" rather than "senior scholars"):

Among a representative group of think tanks founded before 1960, for instance, 53% of scholars hold PhDs. Among a similarly representative group of think tanks founded between 1960 and 1980, 23% of scholars have

... (read more)
Takeaways on US Policy Careers (Part 2): Career Advice

Thanks for catching that! I think it should be fixed now.

Many Undergrads Should Take Light Courseloads

I'd be curious to hear from someone who knows more about graduate programs (in general or specific ones) to what extent this advice generalizes to those contexts.

Many Undergrads Should Take Light Courseloads

Thanks! I see how that applies to the advice of not focusing so much on grades--do you think there's a similar dynamic with the advice of taking fewer classes? Personally, I've felt more nervous about letting grades slip than about taking fewer classes (since the latter doesn't come at the cost of worse grades--if anything, it makes it easier to get good grades on the fewer classes you take). 

Many Undergrads Should Take Light Courseloads

Is your sense that that's better than math major + econ minor + a few classes in stats and computer science + econ research (doing econ research with the time that would have otherwise gone to extra econ classes)? I'd guess this makes sense since I've heard econ grad schools aren't too impressed by econ majors and care a lot about research experience.

Parker_Whitfill: I'd say it's close and depends on the courses you are missing from an econ minor instead of a major. If those classes are 'economics of x' classes (such as media or public finance), then your time is better spent on research. If those classes are still in the core (intermediate micro, macro, econometrics, maybe game theory), I'd probably take those before research. Of course, you are right that admissions care a lot about research experience - but it seems the very best candidates have all those classes AND a lot of research experience.
Many Undergrads Should Take Light Courseloads

Thanks! My initial guess is those are situations when it's good to take specific / high-workload classes--but I'm not sure they're always situations when it's good to take more classes (since people can sometimes take those kinds of classes as parts of meeting their graduation requirements).

Many Undergrads Should Take Light Courseloads

Good point, thanks! Definitely seems like a case where taking hard classes is useful--do you think this is also a case where taking many classes is useful?

Parker_Whitfill: I would say an ideal candidate is a math-econ double major, also taking a few classes in stats and computer science. All put together, that's quite a few classes, but not an unmanageable amount.
Many Undergrads Should Take Light Courseloads

Thanks! Good caveat. I'd add a caveat to the caveat: I'd still caution people against taking very heavy courseloads, because exploring (e.g. asking and reflecting on the questions in your second paragraph) seems hard when all of your time and attention goes to meeting deadlines.

Many Undergrads Should Take Light Courseloads

Thanks for this! I might be more optimistic about how compatible our advice is--I'd say students should cut out both class and non-class activities that don't go far toward advancing their goals. I'd also be curious to hear:

  • It sounds like maybe you consider the credential, networking, or research opportunities offered by universities to be significantly less valuable than classes ("Classes are usually the most valuable thing offered by a university"). If so, what's the thinking behind that?
  • You write: "You shouldn't cut classes until you've already cut out
... (read more)
How much should I care about my undergrad grades?

I don't know in general--here's info relevant to just a few post-undergrad options:

  • For top CS PhD programs:
    • This guide to (top) CS PhD program admissions advises: "When applying to a Ph.D. program in CS, you’d like your grades in CS and Math and Engineering classes to be about 3.5 out of 4.0, as a rough guideline. It does not help you, in my opinion, to be closer to 4.0 as opposed to 3.5. It’s a much better idea to spend your time on research than on optimizing your GPA."
    • This other guide suggests a GPA of at least 3.8.
  • For top law schools:
    • Median GPA at top 3
... (read more)
Many Undergrads Should Take Light Courseloads

Thanks, Aaron! I've felt similarly--crazy how much time (and effort/attention/stress) that frees up :)

Participate in or facilitate fellowships/reading groups for EA if EA is something you want to do. Having other people depend on you or expect things from you can be really motivating. 

I'm into the general point here. I'd also encourage people to be much more ambitious in applying this advice--anecdotally, a significantly lighter courseload leaves enough time to e.g. organize whole fellowships (although facilitation/participation can definitely be a good starting point).

Many Undergrads Should Take Light Courseloads

Thanks! That's right, I was mainly thinking about value for group organizing (although seems generally valuable for making connections).

Why I am probably not a longtermist

Thanks! I'm not very familiar with Haidt's work, so this could very easily be misinformed, but I imagine that other moral foundations / forms of value could also give us some reasons to be quite concerned about the long term, e.g.:

  • We might be concerned with degrading--or betraying--our species / traditions / potential.
  • You mention meaninglessness--a long, empty future strikes me as a very meaningless one.

(This stuff might not be enough to justify strong longtermism, but maybe it's enough to justify weak longtermism--seeing the long term as a major concern.)... (read more)

We might be concerned with degrading--or betraying--our species / traditions / potential.

Yeah, this is a major motivation for me to be a longtermist. As far as I can see, a Haidt/conservative concern for a wider range of moral values, which seem like they might be lost 'by default' if we don't do anything, is a pretty longtermist concern. I wonder if I should write up something longer on this.

Why I am probably not a longtermist

Thanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)

MichaelStJules: I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.
Why I am probably not a longtermist

Fair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.)

(Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have... (read more)

Why I am probably not a longtermist

Hm, I can't wrap my head around rejecting transitivity.

we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P."

Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood. 

seanrson: Yeah, I'm not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we'd have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn't because of the life's contribution to some "total goodness" of the world. Presumably we'd want some further explanation for why this constraint should exist. Maybe such an action involves an impermissible attitude of callous disregard for life or something like that. It seems like there are many parameters we could vary but that might seem too ad hoc.
Why I am probably not a longtermist

Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?

(In other words, "objective list" theories of well-being (if they hold some live... (read more)

Chi: Again, I haven't actually read this, but this article [https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/] discusses intransitivity in asymmetric person-affecting views, i.e. I think in the language you used: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value. But the disvalue of pain is not contingent in this way. I think you should be able to directly apply that to other objective-list theories that you discuss instead of just hedonistic (pleasure-pain) ones. An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This gives you the unfortunate situation that you can't straightforwardly compare different worlds with different population sizes. I don't know enough about the literature to say how people deal with this. I think there's some long work in the works that's trying to make this version work and that also tries to make "creating new suffering people is bad" work at the same time. I think some people probably do think that they are comparable but reject that some lives are better than neutral. I expect that that's rarer though?
MichaelStJules: Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options. 1. If you can bring a good life into existence or none, it is at least permissible to choose none, and under basically any asymmetry that doesn't lead to principled antinatalism (basically all but perfect lives are bad), it's permissible to choose either. 2. If you can bring a good life into existence or none, it is at least permissible to choose none, and under a non-antinatalist asymmetry, it's permissible to choose either. 3. If you can bring a good life into existence, a flourishing life into existence or none, it is at least permissible to choose none, and under a wide view of the asymmetry (basically to solve the nonidentity problem), it is not permissible to bring the merely good life into existence. Under a non-antinatalist asymmetry (which can be wide or narrow), it is permissible to bring the flourishing life into existence. Under a narrow (not wide) non-antinatalist asymmetry, all three options are permissible. If you accept transitivity and the independence of irrelevant alternatives, instead of having the flourishing life better than none, you could have a principled antinatalism: meh life < good life < flourishing life ≤ none, although this doesn't follow.
seanrson: I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever). Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with (b), maintaining that 'better than' or 'more valuable than' is not a transitive relation. Alternatively, we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P." In that case, we might deny that "a meh life is just as valuable as [or more/less valuable than] nonexistence" is meaningful, since there's no one for whom it is more valuable (assuming we reject comparativism, the view that things can be better or worse for merely possible persons). Michael St. Jules is probably aware of better ways this could be resolved. In general, I think that a lot of this stuff is tricky and our inability to find a solution right now to theoretical puzzles is not always a good reason to abandon a view.
Why I am probably not a longtermist

Thanks!

I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others

Curious why you think this first part? Seems plausible but not obvious to me.

in an empty future, everyone has full freedom/autonomy and gets everything they want

I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So n... (read more)

MichaelStJules: I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it wasn't uncommon for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only mattering conditionally on existence, not as a reason to bring them into existence (compared to non-existence, not necessarily compared to another person being born, if we give up the independence of irrelevant alternatives or transitivity). I also think a view of preference satisfaction that assigns positive value to the creation and satisfaction of new preferences is perverse in a way, since it allows you to ignore a person's existing preferences if you can create and satisfy a sufficiently strong preference in them, even against their wishes to do so. Sorry, I should have been more explicit. You wrote "In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled [https://www.existential-risk.org/concept.pdf]", but we can also have values that would go frustrated for a very long time too if we don't go extinct, including even in a future that looks mostly utopian. I also think it's likely the future will contain misery. That's fair. From the paper: It is worth noting that this still doesn't tell us how much greater the difference between total extinction and a utopian future is compared to an 80% loss of life in a utopian future. Furthermore, people are being asked to assume the future will be utopian ("a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier th
seanrson: Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.
Why I am probably not a longtermist

Additional thoughts:

I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.

I think the word "totalitarianism" is pulling too much weight here. I'm sympathetic to something like "existential security requires a great combination of preventative capabilities and civilizational resilience." I don't see why that must involve anything as nasty as totalitarianism. As one alternative, advances in automation might allow for decentralized, narrow, and transparent forms of surveillance-... (read more)

Why I am probably not a longtermist

Thanks for this! Quick thoughts:

  • Curious what you make of writings like these. I think they directly address your crux of whether there are long-lasting, negative lock-in scenarios on the horizon which we can avoid or shape.
    • Relatedly, you mention wanting to give the values of people who are suffering the most more weight. Those and related readings make what some find a good case for thinking that those who suffer most will be future generations--I imagine they'd wish more of their ancestors had been longtermists.
  • I personally find arguments like these and
... (read more)

On your second bullet point, what I would add to Carl's and Ben's posts you link to is that suffering is not the only type of disvalue, or at least "nonvalue" (e.g. meaninglessness comes to mind). Framing this in Haidt's moral foundations theory, suffering only addresses the care/harm foundation.

Also, I absolutely value positive experiences! More so for making existing people happy, but also somewhat for creating happy people. I think I just prioritise it a bit less than the longtermists around me compared to avoiding misery.

I will try to respond to the s-risk point elsewhere.

MichaelStJules: I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others, and you should also consider the other side: in an empty future, everyone has full freedom/autonomy and gets everything they want, no one faces injustice, no one suffers, etc. I think most people think of the badness of extinction as primarily the deaths, not the prevented future lives, though, so averting extinction wouldn't get astronomical weight. From this article [https://www.vox.com/future-perfect/2019/11/7/20903337/human-extinction-pessimism-hopefulness-future] (this paper [https://www.nature.com/articles/s41598-019-50145-9]):
Mauricio: Additional thoughts: I think the word "totalitarianism" is pulling too much weight here. I'm sympathetic to something like "existential security requires a great combination of preventative capabilities and civilizational resilience." I don't see why that must involve anything as nasty as totalitarianism. As one alternative, advances in automation might allow for decentralized, narrow, and transparent forms of surveillance--preventing harmful actions without leaving room for misuse of data (which I'd guess is our usual main concern about mass surveillance). (Calling something "soft totalitarianism" also feels a bit odd, like calling something "mild extremism." Totalitarianism has historically been horrible in large part because it's been so far from being soft/moderate, so sticking the connotations of totalitarianism onto soft/moderate futures may mislead us into underestimating their value.) I don't see how traditional Pascal's mugging type concerns are applicable here. As I understand them, those apply to using expected value reasoning with very low (subjective) probabilities. But surely "humanity will last with at least our current population for as long as the average mammalian species" [https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198722274.001.0001/oso-9780198722274-chapter-6#oso-9780198722274-chapter-6-note-181] (which implies our future is vast) is a far more plausible claim than "I'm a magical mugger from the seventh dimension" [https://www.nickbostrom.com/papers/pascal.pdf]?
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Thanks! Yeah, I think you're right; that + Sean's specific reading suggestions seem like reasonably intuitive introductions to s-risks. Do you think there are similarly approachable introductions to specific s-risks, for when people ask "OK, I'm into this broad idea--what specific things could I work on?" (Or maybe this isn't critical--maybe people are oddly receptive to weird ideas if they've had good first impressions.)

Jamie_Harris: Well, I think moral circle expansion is a good example. You could introduce s-risks as a general class of things, and then talk about moral circle expansion as a specific example. If you don't have much time, you can keep it general and talk about future sentient beings; if animals have already been discussed, mention the idea that if factory farming or something similar was spread to astronomical scales, that could be very bad. If you've already talked about risks from AI, I think you could reasonably discuss some content about artificial sentience [https://forum.effectivealtruism.org/posts/cEqBEeNrhKzDp25fH/the-importance-of-artificial-sentience] without that seeming like too much of a stretch. My current guess is that focusing on detailed simulations as an example is a nice balance between (1) intuitive / easy to imagine and (2) the sorts of beings we're most concerned about. But I'm not confident in that, and Sentience Institute is planning a survey for October that will give a little insight into which sorts of future scenarios and entities people are most concerned about. If by "introductions" you're looking for specific resource recommendations, there are short videos [https://nowthisnews.com/videos/politics/the-end-of-animal-farming-argues-against-factory-farming], podcasts [https://www.sentienceinstitute.org/podcast/episode-16.html], and academic articles [https://www.sciencedirect.com/science/article/pii/S0016328721000641] depending on the desired length, format etc. Some of the specifics might be technical, confusing, or esoteric, but if you've already discussed AI safety, you could quite easily discuss the concept of focusing on worst-case [https://s-risks.org/focus-areas-of-worst-case-ai-safety/] / "fail-safe" [https://longtermrisk.org/suffering-focused-ai-safety/] AI safety measures as a promising area. It's also nice because it overlaps with extinction risk reduction work more (as far as I can tell) and seems like a more tractable goal than preven
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Thanks!

Ah sorry, I hadn't seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:

  • The dichotomy between x-risk reduction and s-risk reduction seems off to me. As I understand them, prominent definitions of x-risks [1] [2] [3] (especially the more thorough/careful discussion in [3]) are all broad enough for s-risks to count as x-risks (especially if we're talking about permanent / locked-in s-risks, which
... (read more)
seanrson: Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we're mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled "Common Ground for Longtermists," or something similar. I think you're right that many definitions of x-risk are broad enough to include (most) s-risks, but I'm mostly concerned about the term "x-risk" losing this broader meaning and instead just referring to extinction risks. It's probably too nuanced for an intro syllabus, but MichaelA's post (https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering) could help people to better understand the space of possible problems.
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Thanks for this! I'm not sure what I think about this--a few things might make it challenging/costly to introduce s-risks into the Introductory EA syllabi:

  1. The Introductory EA Program is partly meant to build participants' sustained interest in EA, and the very speculative / weird nature of s-risks could detract from that (by being off-puttingly out-there to people who--like most program participants--have only spent a handful of hours learning about / reflecting on relevant topics).
    1. One might wonder: if this is an issue, why introduce x-risks? I'd guess x-r
... (read more)
Jamie_Harris: I agree with 2. Not sure about 3, as I haven't reviewed the Introductory fellowship in depth myself. But on 1, I want to briefly make the case that s-risks don't have to be/seem much more weird than extinction risk work. I've sometimes framed it as: the future is vast and it could be very good or very bad, so we probably want to both try to preserve it for the good stuff and improve the quality. (Although perhaps CLR et al don't actually agree with the preserving bit, they just don't vocally object to it for coordination reasons etc.) There are also ways it can seem less weird. E.g. you don't have to make complex arguments about wanting to ensure a thing that hasn't happened yet continues to happen, and missed potential; you can just say: "here's a potential bad thing. We should stop that!!" See https://forum.effectivealtruism.org/posts/seoWmmoaiXTJCiX5h/the-psychology-of-population-ethics for evidence that people, on average, weigh (future/possible) suffering more than happiness. Also consider that one way of looking at moral circle expansion (one method of reducing s-risks) is that it's basically just what many social justicey types are focusing on anyway--increasing protection and consideration of marginalised groups. It just takes it further.
seanrson: Hey Mauricio, thanks for your reply. I'll reply later with some more remarks, but I'll list some quick thoughts here: 1. I agree that s-risks can seem more "out there," but I think some of the readings I've listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introduction to s-risks. The goal is to prevent people from identifying "longtermism" as just extinction risk reduction. 2. Yeah this is definitely true, but completely omitting such a distinctively EA concept as s-risks seems to suggest that something needs to be changed. 3. I think the reading I listed entitled "Common Ground for Longtermists" should address this worry, but perhaps we could add more. I tend to think that the potential for antagonism is outweighed by the value of broader thinking, but your worry is worth addressing.
University EA Groups Should Form Regional Groups

Hey, thanks for your thoughts!

Re hiring: I think there's a difference between hiring for "people to set up the infrastructure" and hiring for "people to fill out the infrastructure" (I wrote about this in another comment). I agree that the first one is very important to do well, I think that the second one can be done on a more natural selection basis. 

I'm not sure I buy either of those claims.

  • I have a pretty strong intuition that someone trying to set up this infrastructure is doomed if they don't try a bunch of things and closely engage with feedbac
... (read more)