All of Jacy_Reese's Comments + Replies

toonalfrink (3y): I see a lot of people from EA orgs reply this way. It's a good sign!
The expected value of extinction risk reduction is positive

Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.

The expected value of extinction risk reduction is positive

Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the "very unlikely that [human descendants] would see value exactly where we see disvalue" argument (I'd call this the 'will argument,' that the future might be dominated by human-descendant will and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using that to justify a very

... (read more)
JanBrauner (3y): Hey Jacy, I have written up my thoughts on all these points in the article. Here are the links.
  • "The universe might already be filled with suffering and post-humans might do something against it." Part 2.2 [https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#22-existing-disvalue-could-be-alleviated-by-colonizing-space]
  • "Global catastrophes that don't lead to extinction might have negative long-term effects" Part 3 [https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#31-efforts-to-reduce-non-ai-extinction-risk-reduce-global-catastrophic-risk58]
  • "Other non-human animal civilizations might be worse" Part 2.1 [https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#21-whether-post-humans-colonizing-space-is-good-or-bad-space-colonization-by-other-agents-seems-worse]
The final paragraphs of each section usually contain discussion of how relevant I think each argument is. All these sections also have some quantitative EV estimates (linked or in the footnotes). But you probably saw that, since it is also explained in the abstract. So I am not sure what you mean when you say that my article neglects them. Are we talking about the same arguments?
The expected value of extinction risk reduction is positive

Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion that the expected value of extinction risk reduction is (highly) positive.

I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.

Also FYI the link in your article to "moral circle expansion" is dead. We work on that approach at

... (read more)
JanBrauner (3y): Hey Jacy, I have seen and read your post. It was published after my internal "Oh my god, I really, really need to stop reading and integrating even more sources, the article is already way too long" deadline, so I don't refer to it in the article. In general, I am more confident about the expected value of extinction risk reduction being positive than about extinction risk reduction actually being the best thing to work on. It might well be that e.g. moral circle expansion is more promising, even if we have good reasons to believe that extinction risk reduction is positive. I personally don't think that this argument is very strong on its own. But I think there are additional strong arguments (in descending order of relevance):
  • "The universe might already be filled with suffering and post-humans might do something against it."
  • "Global catastrophes that don't lead to extinction might have negative long-term effects"
  • "Other non-human animal civilizations might be worse"
  • ...
Why I'm focusing on invertebrate sentience

I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than they are now, and you seem to be approaching it in a smart way, so I hope you get funding!

I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. esta

... (read more)
Denkenberger (3y): My prior here is brain size weighting for suffering, which means insects are of similar importance to humans currently. But I would guess they would be less tractable than humans (though obviously far more neglected). So I think if there could be compelling evidence that we should be weighting insects 5% as much as humans, that would be an enormous update and make invertebrates the dominant consideration in the near future.
2018 list of half-baked volunteer research ideas

[1] Cochrane mass media health articles (and similar):

  • Targeted mass media interventions promoting healthy behaviours to reduce risk of non-communicable diseases in adult, ethnic minorities
  • Mass media interventions for smoking cessation in adults
  • Mass media interventions for preventing smoking in young people.
  • Mass media interventions for promoting HIV testing
  • Smoking cessation media campaigns and their effectiveness among socioeconomically advantaged and disadvantaged populations
  • Population tobacco control interventions and their effects on social inequa
... (read more)
Which piece got you more involved in EA?

I can't think of anything that isn't available in a better form now, but it might be interesting to read for historical perspective, such as what it looks like to have key EA ideas half-formed. This post on career advice is a classic. Or this post on promoting Buddhism as diluted utilitarianism, which is similar to the reasoning a lot of utilitarians had for building/promoting EA.

Which piece got you more involved in EA?

The content on Felicifia.org was most important in my first involvement, though that website isn't active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it's casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.

Ben Pace (3y): I also have never read anything on Felicifia.org (but would like to)! If there's anything easy to link to, I'd be interested to have a read through any archived content that you thought was especially good / novel / mind-changing.
Leverage Research: reviewing the basic facts

Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn't predict that), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.

Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply

I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there's still a small chance of big purchasing changes even though each small consumption change doesn't always lead to a purchasing change.

Why I prioritize moral circle expansion over artificial intelligence alignment

Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.

I don't think terraforming would be done very differently from today's wildlife, e.g. I doubt it would be done without predation and diseases.

Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.

Why I prioritize moral circle expansion over artificial intelligence alignment

I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.

Why I prioritize moral circle expansion over artificial intelligence alignment

Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.

Why I prioritize moral circle expansion over artificial intelligence alignment

I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.

This may take a few decades, but social change might take even longer.

To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the E... (read more)

Why I prioritize moral circle expansion over artificial intelligence alignment

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean that, without MCE advocacy being done, they wouldn't put wide-MC* values (or values that lead to a wide MC) into an aligned AI. But I think it's being conflated with "they actively oppose" or "they would answer 'no' if asked, 'Do you think your values are wr... (read more)

William_S (4y): Why do you think this is the case? Do you think there is an alternative reflection process (either implemented by an AI, by a human society, or a combination of both) that could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like? If we go through some kind of reflection process to determine our values, I would much rather have a reflection process that wasn't dependent on whether or not MCE occurred beforehand, and I think not leading to a wide moral circle should be considered a serious bug in any definition of a reflection process. It seems to me that working on producing this would be a plausible alternative, or at least a parallel path, to directly performing MCE.
Why I prioritize moral circle expansion over artificial intelligence alignment

I personally don't think WAS is that similar to the most plausible far future dystopias, so I've been prioritizing it less even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation but then let the simulation run on its own for a while, the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).

I think if one wants something very... (read more)

saulius (4y): But humanity/AI is likely to expand to other planets. Won't those planets need to have complex ecosystems that could involve a lot of suffering? Or do you think it will all be done with some fancy tech that'll be too different from today's wildlife for it to be relevant? It's true that those ecosystems would (mostly?) be non-naturogenic, but I'm not that sure that people would care about them; it'd still be animals/diseases/hunger etc. hurting animals. Maybe it'd be easier to engineer an ecosystem without predation and diseases, but that is a non-trivial assumption and suffering could then arise in other ways. Also, some humans want to spread life to other planets for its own sake, and relatively few people need to want that to cause a lot of suffering if no one works on preventing it. This could be less relevant if you think that most of the expected value comes from simulations that won't involve ecosystems.
Why I prioritize moral circle expansion over artificial intelligence alignment

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by mo... (read more)

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority o... (read more)

Why I prioritize moral circle expansion over artificial intelligence alignment

Thanks! That's very kind of you.

I'm pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you're drawing the lines).

A few exceptions to that:

  • Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating

... (read more)
Why I prioritize moral circle expansion over artificial intelligence alignment

I'm sympathetic to both of those points personally.

1) I considered that, and in addition to time constraints, I know others haven't written on this because there's a big concern that talking about it makes it more likely to happen. I err more towards sharing it despite this concern, but I'm pretty uncertain. Even the detail of this post was more than several people wanted me to include.

But mostly, I'm just limited on time.

2) That's reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause ar... (read more)

Why I prioritize moral circle expansion over artificial intelligence alignment

That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

zdgroff (4y): Isn't hedonium inherently as good as dolorium is bad? If it's not, can't we just normalize and then treat them as the same? I don't understand the point of saying there will be more hedonium than dolorium in the future, but the dolorium will matter more. They're vague and made-up quantities, so can't we just set it so that "more hedonium than dolorium" implies "more good than bad"?
Why I prioritize moral circle expansion over artificial intelligence alignment

Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.

However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for ... (read more)

Why I prioritize moral circle expansion over artificial intelligence alignment

Thanks for the comment! A few of my thoughts on this:

Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.

If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), a

... (read more)
Brian_Tomasik (4y): I would guess that increasing understanding of cognitive science would generally increase people's moral circles, if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.
How to get a new cause into EA

I'd go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove than the other three because WMHs have the most resources, far more than the other three populations. (Potentially there's also medium-term future beings as a distinct population, depending on where we draw the lines.)

I think EA would probably be discovering more things if... (read more)

Daniel_Dewey (4y): I think this is a good point; you may also be interested in Michelle's post about beneficiary groups [http://effective-altruism.com/ea/rw/causes_in_effective_altruism/], my comment about beneficiary subgroups [http://effective-altruism.com/ea/rw/causes_in_effective_altruism/65c], and Michelle's follow-up about finding more effective causes [http://effective-altruism.com/ea/s0/finding_more_effective_causes/].
davidc (4y): I guess this thought is probably implicit in a lot of EA, but I'd never quite heard it stated that way. It should be more often! That said, I think it's not quite precise. There's a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say "far future").
Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective

Thanks for the response. My main general thought here is just that we shouldn't expect so much from the reader. Most people, even most thoughtful EAs, won't read in full and come up with all the qualifications on their own, so it's important for article writers to include those themselves, and to include them front and center in their articles.

If you wanted to spend a lot of time on "what causes do EA leadership favor," one project I see as potentially really valuable is a list of arguments/evidence and getting EA leaders to vote on their w... (read more)

Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective

[Disclaimer: Rob, 80k's Director of Research, and I briefly chatted about this on Facebook, but I want to make a comment here because that post is gone and more people will see it here. Also, as a potential conflict-of-interest, I took the survey and work at an organization that's between the animal and far future cause areas.]

This is overall really interesting, and I'm glad the survey was done. But I'm not sure how representative of EA community leaders it really is. I'd take the cause selection section in particular with a big grain of salt, and I wish i... (read more)

Robert_Wiblin (4y): Hey Jacy, thanks for the detailed comment - with EA Global London on this weekend I'll have to be brief! :)

One partial response is that even if you don't think this is fully representative of the set of all organisations you'd like to have seen surveyed, it's informative about the groups that were. We list the orgs that were surveyed, and point out near the start of the article which ones weren't, so people understand who the answers represent. You can take this information for whatever it's worth!

As for who I chose to sample - on any definition there's always going to be some grey area, orgs that almost meet that definition but don't quite. I tried to find all the organisations with full-time staff who i) were a founding part of the EA movement, or ii) were founded by people who identify strongly as part of the EA community, or iii) are now mostly led by people who identify more strongly as part of the EA movement than any other community. I think that's a natural grouping and don't view AMF, MfA or CHAI as meeting that definition (though I'd be happy to be corrected if any group does meet this definition whose leadership I'm not personally familiar with).

The main problem with that question in my mind is underrepresentation of GiveWell, which has a huge budget and is clearly a central EA organisation - the participants from GiveWell gave me one vote to work with but didn't provide quantitative answers, as they didn't have a strong or clear enough view. More generally, people from the sample who specialise in one cause were more inclined to say they didn't have a view on which fund was most effective and so not answer it (which is reasonable but could bias the answers).

Personally, like you, I give more weight to the views of specialist cause priorities researchers working at cause-neutral organisations. They were more likely to answer the question and are singled out in the table with individual votes. Interestingly, their results were quite similar to the full sample.
Should EAs think twice before donating to GFI?

As of December 2016, my impression was that ACE wasn't shifting and hadn't shifted towards a hits-based or more risk-averse approach. I don't know if this is because they were already more in that direction than Rob thinks, or because they didn't move to the hits-based position Rob thinks they currently have.

[I worked for ACE, on the board and then as a researcher, until December 2016. This is just my personal opinion.]