All of Jacy's Comments + Replies

Thanks for this summary. While there are many disanalogies between historical examples and current events, I think it's easy for us to neglect the historical evidence and try to reinvent wheels.

Jacy · 1y

I didn't downvote, and the comment is now at +12 votes and +8 agreement (not sure where it was before), but my guess is it would be more upvoted if it were worded more equivocally (e.g., "I think the evidence suggests climate change poses...") and had links to the materials you reference (e.g., "[link] predicted that the melting of the Greenland ice cap would occur..."). There also may be object-level disagreements (e.g., some think climate change is an existential risk for humans in the long run or in the tail risks, such as where geoengineering might be ... (read more)

philgoetz · 1y
You're right about my tendency towards tendentiousness. Thanks! I've reworded it some, though not to include "I think that," because I'm making objective statements about what the IPCC has written.
Jacy · 1y

The collapse of FTX may be a reason for you to update towards pessimism about the long-term future.

I see a lot of people's worldviews updating this week based on the collapse of FTX. One view I think people in EA may be neglecting to update towards is pessimism about the expected value of the long-term future. Doing good is hard. As Tolstoy wrote, "All happy families are alike; each unhappy family is unhappy in its own way." There is also Yudkowsky: "Value is fragile"; Burns: "The best-laid plans of mice and men often go awry"; von Moltke: "No plan survive... (read more)

Jacy · 1y

"EA's are, despite our commitments to ethical behaviour, perhaps no more trustworthy with power than anyone else."

I wonder if "perhaps no more trustworthy with power than anyone else" goes a little too far. I think the EA community made mistakes that facilitated FTX misbehavior, but that is only one small group of people. Many EAs have substantial power in the world and have continued to be largely trustworthy (and thus less newsworthy!), and I think we have evidence like our stronger-than-average explicit commitments to use power for good and the critical... (read more)

Fair point. I think, in a knee-jerk reaction, I adjusted too far here. At the very least, it seems that EAs are at least somewhat more likely to do good with power, if they have that aim, than people who just want power for power's sake. It's still a downward adjustment on my part for the EV of EA politicians, but not to zero relative to the median candidate of the same political party.

Well, my understanding now is that it is very structurally different (not just reputationally or culturally different) from publicly traded stock: the tiny trading volume, the guaranteed price floor, probably other things. If it were similar, I think I would probably have much less of that concern. This does imply standard net worth calculations for Sam Bankman-Fried were poor estimates, and I put a decent chance on Forbes/Bloomberg/etc. making public changes to their methodology because of this (maybe 7% chance? very low base rate).

I've updated a little toward this being less concerning. Thanks.

That makes sense.

  • To clarify, I wasn't referring to leverage (which I think most would say counts as fraud, given FTX's claims to the contrary) in the comment above, just the fragility and illiquidity of the token itself.
  • My understanding is that some EA leadership knew much of the committed wealth was in FTT (at least, I knew, and I know some others who knew), and I worry that a few knew enough about cryptocurrency to know how fragile and illiquid that situation was (I did not, but I should have looked into it more) but allowed that to go unmentioned or u
... (read more)

Hmm, I do think in the absence of the leverage, having wealth in FTT was kind of reasonable, and the leverage was the primary thing that enabled the whole thing to implode this quickly. 

I was still surprised by Alameda not having a more diversified portfolio, but I think it's basically accurate to model FTT as stock, and it's not that crazy to have a lot of your wealth in your own stock (and for it to be hard for you to exit that position, since it looks really suspicious if you sell a lot of your own stock). 

But I do agree that there was probabl... (read more)

Jacy · 1y

I strongly agree with this. In particular, it seems that the critiques of EA in relation to these events are much less focused on the recent fraud concern than EAs are in their defenses. I think we are choosing the easiest thing to condemn and distance ourselves from, in a very concerning way. Deliberately or not, our focus on the outrage against recent fraud distracts onlookers and community members from the more serious underlying concerns that weigh more heavily on our behavior given their likelihood.

The 2 most pressing to me are the possibilities (i) t... (read more)

For what it's worth, as someone who said in another thread that I do think there were concerns about Sam's honesty circulating, I don't know of anyone I have ever talked to who expressed concern about the money being held primarily in FTT, or who would have predicted anything close to the hole in the balance sheet that we now see.

I heard people say that we should assume that Sam's net wealth has high variance, given that crypto is a crazy industry, but I think you are overstating the degree to which people were aware of the incredible leverage in FTX's... (read more)

Jacy · 1y

This is great data to have! Thanks for collecting and sharing it. I think the Sioux Falls (Metaculus underestimate of the 48% ban support) and Swiss (Metaculus overestimate of the 37% ban support) factory farming ban proposals are particularly interesting opportunities to connect this survey data to policy results. I'll share a few scattered, preliminary thoughts to spark discussion, and I hope to see more work on this topic in the future.

  • These 2022 results seem to be in line with the very similar surveys conducted by Rethink Priorities in 2019, whic
... (read more)
Neil_Dullaghan · 1y
We should also note that Norwood (one of the authors who replicated SI’s original 2017 study) this year ran a new slaughterhouse ban survey experiment ([Britton & Norwood 2022](https://doi.org/10.1017/aae.2022.17)) and found lower support. (I only just received the data from them so I couldn’t include it in the post). Here is my summary from just skimming the article and quickly aggregating the data.

They test a hypothesis that the question ordering in the 2017 SI study cued respondents' ideal self (like whether voting is a moral virtue) rather than their common self (like whether they actually voted). Their theory is that by asking respondents first whether they agreed with statements about meat reduction, discomfort with the way animals are used in the food industry, and animal sentience it cued their ideal self so that “the desire to not appear hypocritical induced them to activate a mixture of their ideal and common self” when answering questions about bans on animal farming, factory farming, and slaughterhouses.

The actual design of their study is a little too complicated to explain here (involving four treatments that altered the order and wording of ideal and common self questions, some food-related and some non-food related, as well as inserting buffer questions), but basically some respondents saw the ban questions before the ideal self questions, and others saw them in the same order as in the original 2017 SI study. Furthermore, to build on their tests about whether respondents understood the implications of bans, "roughly half of the subjects are given the [common self] statements exactly as they appeared on the Animal Sentience survey, while the other half contain an addition [. . .] For example, some see the statement “I support a ban on slaughterhouses” while others see the statement “I support a ban on slaughterhouses and will stop eating meat”."

While the primary aim of their study was to test something they call “identity inertia” and they

Thanks for reading and engaging with our work!

  • In 2019 we conducted some exploratory small-N, low-confidence studies on this topic that informed these high-quality, larger-N studies. We feel comfortable presenting these recent results because we want to promote the norm of advocates choosing strategies and messages based on the best-quality evidence, so we would rather the community update on the results of high-quality studies than on low-confidence small-N studies where the wrong inferences may be drawn.
  • [Update 2022-Nov-18: I added a methods section to
... (read more)
Jacy · 1y

Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting on their significant room for more funding. (There has been some pushback, and my impression that... (read more)

Thanks for going into the methodological details here.

I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I... (read more)
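To make the attribution arithmetic above concrete, here is a minimal sketch in Python (my own illustration, not code from the post; the function name `split_weight` is hypothetical, and the numbers are just the war example from this comment):

```python
# Minimal sketch (my own illustration) of the proportional weight-splitting
# described above: a piece of evidence with an overall weight is attributed
# to each consideration in proportion to how much that consideration
# "explains" it.

def split_weight(total_weight, shares):
    """Attribute total_weight across considerations by their explanatory shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares should sum to 1"
    return {name: total_weight * share for name, share in shares.items()}

# The war example from the comment: -8 overall, split 75% / 25%.
print(split_weight(-8, {"Historical Harms": 0.75, "Disvalue Through Intent": 0.25}))
# {'Historical Harms': -6.0, 'Disvalue Through Intent': -2.0}
```

The same split can be applied to any piece of evidence that several considerations jointly explain, so the category totals sum back to the evidence's overall weight.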

Mau · 2y
I think we're on a similar page regarding double-counting--the approach you describe seems like roughly what I was going for. (My last comment was admittedly phrased in an overly all-or-nothing way, but I think the numbers I attached suggest that I wasn't totally eliminating the weight on history.) On whether we see "reasons for negative weight" differently, I think that might be semantic--I had in mind the net weight, as you suggest (I was claiming this net weight was 0).

The suggestion that digital minds might be affected just by their being different is a good point that I hadn't been thinking about. (I could imagine some people speculating that this won't be much of a problem because influential minds will also eventually tend to be digital.) I tentatively think that does justify a mildly negative weight on digital minds, with the other factors you mention seeming to be fully accounted for in other weights.

This is helpful data. Two important axes of variation here are:

- Time, where this has fortunately become more frequently discussed in recent years
- Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.

Trade 3 is removing a happy person, which is usually bad in a person-affecting view, and possibly bad enough that the trade isn't worth the $0.99, so no Dutch book arises.

Rohin Shah · 2y
Responded in the other comment thread.
Jacy · 2y

Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then).

I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.

[anonymous] · 2y
Why should we believe that you have in fact changed? You were kicked out of Brown for sexual misconduct. You claim to believe that the allegations at that time were false. Instead of being extra-careful in your sexual conduct following this, at least five women complained to CEA about your sexual misconduct, and CEA called the complaints 'credible and concerning'. There is zero reason to think you have changed. Plus, you're a documented liar, so we should have no reason to believe you.
[anonymous] · 2y
It's a comment that is typical of Jacy - he cannot help but dissemble. "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research." It makes it sound like he was going to step back anyway even while he was touting himself as an EA co-founder and was about to promote his book! In fact, if you read between the lines, CEA severed ties between him and the community. He then pretends that he was going to do this anyway. The whole apology is completely pathetic. 
Jacy · 2y

[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]

Thanks for clarifying. I see the connection between both sets of comments, but the draft comments still seem more like 'it might be confusing whether this is about your experience in EA or an even-coverage history', while the new comments seem more like 'it might give the impression ... (read more)

Thanks. I agree with essentially all of this, and I left a comment with details: https://forum.effectivealtruism.org/posts/ZbdNFuEP2zWN5w2Yx/ryancarey-s-shortform?commentId=oxodp9BzigZ5qgEHg

I would reiterate that this was only on my website for a few weeks, and I removed it as soon as I got the negative feedback. [Edit: As I say in my detailed comment, I viewed the term "co-founder" in terms of the broad base of people who built EA as a social movement. Others read it as a narrower term, such as the 1-3 co-founders of a typical company or nonprofit. Now I... (read more)

[anonymous] · 2y

You removed it after people called you out on it being bullshit and you know it isn't true. 

[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]

I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don't give the impression you're worried about. I strongly agree with your guess that EA would... (read more)

> I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don't give the impression you're worried about.

If it seemed to you like I was raising different issues in the draft, then each to their own, I guess. But these concerns were what I had in mind when I wrote comments like the following:

> 2004–2008: Before I found other EAs

If you're starting with this, then you should probably include "my" in the title (or similar) becau

... (read more)
Jacy · 2y

Hi John, just to clarify some inaccuracies in your two comments:

- I’ve never harassed anyone, and I’ve never stated or implied that I have.  I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology, I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.

- I didn’t lie on my website. I had (in a few places) described myself as a ... (read more)

Hi Jacy, you said in your apology "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research."

I haven't seen you around since then, so was surprised to see you attend an EA university retreat* and start posting more about EA. Would you describe yourself as stepping back into the EA community now?

*https://twitter.com/jacyanthis/status/1515682513280282631?s=20&t=reRvYxXCs2z-AvszF31Gng

[anonymous] · 2y
* Were you expelled from Brown for sexual harassment? Or was that also for clumsy online flirting?
* You did lie on your website. It is false that you are a co-founder of effective altruism. There is not a single person in the world who thinks that is true, and you only said it to further your career. That you can't even acknowledge that that was a lie speaks volumes.
* Perhaps CEA can clarify whether there was any connection between the allegations and CEA severing ties with SI.
* Were the allegations reported to the Sentience Institute before CEA? Why did you not write a public apology before CEA approached you with the allegations? You agreeing with CEA to being banned from EA events and you being banned from EA events are the same thing.
* The issue is how long you should 'step away' from the community for.

It's great to know where your specific weights differ! I agree that each of the arguments you put forth is important. Some specifics:

  • I agree that differences in the future (especially the weird possibilities like digital minds and acausal trade) are a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability, like from humans to insects) seems a lot more important than others (e.g., the fact that animal muscle and fat happens to be an evolutionarily advantageou
... (read more)
Jamie_Harris · 2y
I also put my intuitive scores into a copy of your spreadsheet. In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs the "Historical Harms" argument, since these seem like the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive. But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment, though.

Also interesting how wildly different our scores are. Partly I think this might be because I was quite confused/worried about double-counting. Also maybe just not fully grasping some of the points listed in the post.
Mau · 2y
Thanks! Responding on the points where we may have different intuitions:
* Regarding your second bullet point, I agree there are a bunch of things that we can imagine having gone differently historically, where each would have been enough to make things go better. These other factors are all already accounted for, so putting the weight on historical harms/progress again still seems to be double-counting (even if which thing it's double-counting isn't well-defined).
* Regarding your third bullet point, thanks for flagging those points - I don't think I buy that any of them are reasons for negative weight.
  * Intrusions could be harmful, but there could also be positive analogues.
  * Duplication, instrumental usefulness, and nested minds are just reasons to think there might be more of these minds, so these considerations only seem net negative if we already have other reasons to assume these minds' well-being would be net negative (we may have such reasons, but I think these are already covered by other factors, so counting them here seems like double-counting)
  * (As long as we're speculating about nested minds: should we expect them to be especially vulnerable because others wouldn't recognize them as minds? I'm skeptical; it seems odd to assume we'll be at that level of scientific progress without having learned how experiences work.)
* On interpretation of the spreadsheet:
  * I think (as you might agree) that results should be taken as suggestive but far from definitive. Adding things up fails to capture many important dynamics of how these things work (e.g., cooperation might not just create good things but also separately counteract bad things).
  * Still, insofar as we're looking at these results, I think we should mostly look at the logarithmic sum (because some dynamics of the future could easily be far more important than others).
  * As I suggested, I have a few smaller quibbles, so these aren't quite my numbers (although these quibbles don

This is super interesting. Thanks for writing it. Do you think you're conflating several analytically distinct phenomena when you say (i) "Fanaticism is the idea that we should base our decisions on all of the possible outcomes of our actions no matter how unlikely they are ... EA fanatics take a roughly maximize expected utility approach" and (ii) "Fanaticism is unreasonable"?

For (i), I mainly have in mind two approaches "fanatics" could be defined by: (ia) "... (read more)

Derek Shiller · 2y
I meant to suggest that our all-things-considered assignments of probability and value should support projects like the ones I laid out. Those assignments might include napkin calculations, but if we know we overestimate those, we should adjust accordingly. This sounds to me like it is in line with my takeaways.

Perhaps we differ on the grounds for sandboxing? Expected value calculations don't involve capping influence of component hypotheses. Do you have a take on how you would defend that?

I don't mean to say that fanaticism is wrong. So please don't read this as a reductio. Interpreted as a claim about rationality, I largely am inclined to agree with it. What I would disagree with is a normative inference from its rationality to how we should act. Let's not focus less on animal welfare or global poverty because of farfetched high-value possibilities, even if it would be rational to do so.
Jacy · 2y

Brief Thoughts on the Prioritization of Quality Risks

This is a brief shortform post to accompany "The Future Might Not Be So Great." These are just some scattered thoughts on the prioritization of quality risks that weren't quite relevant enough to go in the post itself. Thanks to those who gave feedback on the draft of that post, particularly on this section.

People ask me to predict the future, when all I want to do is prevent it. Better yet, build it. Predicting the future is much too easy, anyway. You look at the people around you, the street you stand on, th

... (read more)

That's right that we don't have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research includes that, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects where we collect data on both moral concern for animals and for AI, such as on perspective-taking, predictors of moral concern, and our recently conducted US nationally ... (read more)

Black Box · 2y
Thanks for the explanation; I do support what SI is doing (researching problems around digital sentience as moral patients, which seems to be an important and neglected area), and your reasoning makes sense!
Jacy · 2y

Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on "AI Ethics: The Case for Including Animals"; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and "smart farming" for farmed animals are a common topic, such as this recent article th... (read more)

Black Box · 2y
Sentience Institute has, in its research agenda, research projects about digital sentients (which presumably include certain possible forms of AI) as moral patients, but (please correct me if I'm wrong) in the "In-progress research projects" section there doesn't seem to be anything substantial about the impact of AI (especially transformative AI) on animals?
toonalfrink · 5y
I see a lot of people from EA orgs reply this way. It's a good sign!

Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.

Jacy · 5y

Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the "very unlikely that [human descendants] would see value exactly where we see disvalue" argument (I'd call this the 'will argument,' that the future might be dominated by human-descendant will and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using that to justify a very

... (read more)
JanB · 5y
Hey Jacy, I have written up my thoughts on all these points in the article. Here are the links:
* "The universe might already be filled with suffering and post-humans might do something against it." Part 2.2
* "Global catastrophes, that don't lead to extinction, might have negative long-term effects" Part 3
* "Other non-human animal civilizations might be worse" Part 2.1

The final paragraphs of each section usually contain discussion of how relevant I think each argument is. All these sections also have some quantitative EV-estimates (linked or in the footnotes). But you probably saw that, since it is also explained in the abstract. So I am not sure what you mean when you say: Are we talking about the same arguments?
Jacy · 5y

Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion, the expected value of extinction risk reduction being (highly) positive.

I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.

Also FYI the link in your article to "moral circle expansion" is dead. We work on that approach at

... (read more)
JanB · 5y
Hey Jacy, I have seen and read your post. It was published after my internal "Oh my god, I really, really need to stop reading and integrating even more sources, the article is already way too long"-deadline, so I don't refer to it in the article.

In general, I am more confident about the expected value of extinction risk reduction being positive, than about extinction risk reduction actually being the best thing to work on. It might well be that e.g. moral circle expansion is more promising, even if we have good reasons to believe that extinction risk reduction is positive.

I personally don't think that this argument is very strong on its own. But I think there are additional strong arguments (in descending order of relevance):
* "The universe might already be filled with suffering and post-humans might do something against it."
* "Global catastrophes, that don't lead to extinction, might have negative long-term effects"
* "Other non-human animal civilizations might be worse"
* ...
Jacy · 5y

I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than they are now, and you seem to be approaching it in a smart way, so I hope you get funding!

I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. esta

... (read more)
Denkenberger · 5y
My prior here is brain-size weighting for suffering, which means insects are of similar importance to humans currently. But I would guess they would be less tractable than humans (though obviously far more neglected). So I think if there could be compelling evidence that we should be weighting insects 5% as much as humans, that would be an enormous update and make invertebrates the dominant consideration in the near future.

[1] Cochrane mass media health articles (and similar):

  • Targeted mass media interventions promoting healthy behaviours to reduce risk of non-communicable diseases in adult, ethnic minorities
  • Mass media interventions for smoking cessation in adults
  • Mass media interventions for preventing smoking in young people.
  • Mass media interventions for promoting HIV testing
  • Smoking cessation media campaigns and their effectiveness among socioeconomically advantaged and disadvantaged populations
  • Population tobacco control interventions and their effects on social inequa
... (read more)

I can't think of anything that isn't available in a better form now, but it might be interesting to read for historical perspective, such as what it looks like to have key EA ideas half-formed. This post on career advice is a classic. Or this post on promoting Buddhism as diluted utilitarianism, which is similar to the reasoning a lot of utilitarians had for building/promoting EA.

The content on Felicifia.org was most important in my first involvement, though that website isn't active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it's casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.

Ben Pace · 6y
I also have never read anything on Felicifia.org (but would like to)! If there's anything easy to link to, I'd be interested to have a read through any archived content that you thought was especially good / novel / mind-changing.
Jacy · 6y

Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn't predict that), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.

Jacy · 6y

I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there's still a small chance of big purchasing changes even though each small consumption change doesn't always lead to a purchasing change.

Jacy · 6y

Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.

I don't think terraforming would be done very differently from today's wildlife, e.g. without predation and diseases.

Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.

Jacy · 6y

I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.

Jacy · 6y

Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.

I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.

This may take a few decades, but social change might take even longer.

To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the E... (read more)

Jacy · 6y

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean they -- without MCE advocacy being done -- wouldn't put wide-MC* values, or values that lead to wide-MC, into an aligned AI. But I think it's being conflated with, "they actively oppose" or "they would answer 'no' if asked, 'Do you think your values are wr... (read more)

William_S · 6y
Why do you think this is the case? Do you think there is an alternative reflection process (either implemented by an AI, by a human society, or a combination of both) that could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like?

If we go through some kind of reflection process to determine our values, I would much rather have a reflection process that wasn't dependent on whether or not MCE occurred beforehand, and I think not leading to a wide moral circle should be considered a serious bug in any definition of a reflection process. It seems to me that working on producing this would be a plausible alternative or at least parallel path to directly performing MCE.
Jacy · 6y

I personally don't think WAS is as similar to the most plausible far future dystopias, so I've been prioritizing it less even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation, but then let the simulation run on its own for a while, then the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).

I think if one wants something very... (read more)

saulius · 6y
But humanity/AI is likely to expand to other planets. Won't those planets need to have complex ecosystems that could involve a lot of suffering? Or do you think it will all be done with some fancy tech that'll be too different from today's wildlife for it to be relevant? It's true that those ecosystems would (mostly?) be non-naturogenic, but I'm not that sure that people would care about them; it'd still be animals/diseases/hunger, etc. hurting animals. Maybe it'd be easier to engineer an ecosystem without predation and diseases, but that is a non-trivial assumption, and suffering could then arise in other ways. Also, some humans want to spread life to other planets for its own sake, and relatively few people need to want that to cause a lot of suffering if no one works on preventing it. This could be less relevant if you think that most of the expected value comes from simulations that won't involve ecosystems.
Jacy · 6y

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by mo... (read more)

> The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority o... (read more)

Jacy · 6y

Thanks! That's very kind of you.

I'm pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you're drawing the lines).

A few exceptions to that:

  • Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating

... (read more)
Jacy · 6y

I'm sympathetic to both of those points personally.

1) I considered that, and in addition to time constraints, I know others haven't written on this because there's a big concern that talking about it makes it more likely to happen. I err more towards sharing it despite this concern, but I'm pretty uncertain. Even the detail of this post was more than several people wanted me to include.

But mostly, I'm just limited on time.

2) That's reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause ar... (read more)

That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

zdgroff · 6y
Isn't hedonium inherently as good as dolorium is bad? If it's not, can't we just normalize and then treat them as the same? I don't understand the point of saying there will be more hedonium than dolorium in the future, but the dolorium will matter more. They're vague and made-up quantities, so can't we just set it so that "more hedonium than dolorium" implies "more good than bad"?
Jacy · 6y

Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.

However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for ... (read more)

Jacy · 6y

Thanks for the comment! A few of my thoughts on this:

> Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.

If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

> Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), a

... (read more)
Brian_Tomasik · 6y
I would guess that increasing understanding of cognitive science would generally increase people's moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.