All of seanrson's Comments + Replies

I think the objection comes from the seeming asymmetry between over-attributing and under-attributing consciousness. It's fine to discuss our independent impressions about some topic, but when one's view is a minority position and the consequences of false beliefs are high, isn't there some obligation of epistemic humility?

7
niplav
8mo
Disagreed; animal moral patienthood competes with all the other possible interventions effective altruists could be doing, and does so symmetrically (the opportunity cost cuts in both directions!).

Maybe the examples are ambiguous, but they don't seem cherrypicked to me. Aren't these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don't know, monetary policy, not issues central to AI and cognitive science.

5
Larks
8mo
If I was trying to list central historical claims that Eliezer made which were controversial at the time I would start with things like:
* AGI is possible.
* AI alignment is the most important issue in the world.
* Alignment will not be easy.
* People will let AGIs out of the box.
6
trevor1
8mo
None of these issues is "central" to AI, or to the cognitive science relevant to AI, AI alignment, or human upskilling. The author's area of interest is more about consciousness, animal welfare, and qualia. Yet these issues are the sole thing justifying Omnizoid's rather heated indictments against Yudkowsky. Most readers will only read the accusations in the introduction and then bounce off the evidence backing them, because all of them are topics that, like string theory, only a handful of people on earth are capable of engaging with; it just so happens that the author is one of them. Virtually nobody can read the actual arguments behind this post without dedicating >4 hours of their life to it, which makes it pretty well optimized to attract attention and damage Yudkowsky's reputation as much as possible with effectively zero accountability.

Hey Jack! In support of your view, I think you'd like some of Magnus Vinding's writings on the topic. Like you, he expresses some skepticism about focusing on narrower long-term interventions like AI safety research (vs. broader interventions like improved institutions).

Against your view, you could check out these two (i, ii) articles from CLR.

Feel free to message me if you'd like more resources. I'd love to chat further :)

How about Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans?

Oh totally (and you probably know much more about this than me). I guess the key thing I'm challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more like the new adaptations allowed for cumulative cultural change, which allowed for more power.

Answer by seanrson, Sep 20, 2022
25
5
2

Psychology/anthropology:

The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power--not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

6
Geoffrey Miller
2y
Well, human brains are about three times the mass of chimp brains, diverged from our most recent common ancestor with chimps about 6 million years ago, and have evolved a lot of distinctive new adaptations such as language, pedagogy, virtue signaling, art, music, humor, etc. So we might not want to put too much emphasis on cumulative cultural change as the key explanation for human/chimp differences.

In addition to (farmed and wild) animal organizations, OPIS is worth checking out.

1
Yanay
2y
Thank you very much! 
3
NunoSempere
2y
+1; also a fan of OPIS.

Here's a list of organizations focusing on the quality of the long-term future (including the level of suffering), from this post:
 

If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, and Open Phil

... (read more)
1
Yanay
2y
Thank you. Indeed, I have read about s-risks; addressing them is important but very unpredictable (so much so that some efforts to prevent long-term suffering can actually produce it). Until I make up my mind on the matter and can direct my actions - both my contributions and my career - toward preventing long-term suffering, I am looking for more modest but safer options to prevent the suffering that exists at this moment.

I found this to be a comprehensive critique of some of the EA community's theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend's suggestions, especially adding a TLDR to this post.

Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk construed broadly is a bit murky and potentially includes these (depending on how much the reduction in quality threatens “humanity’s potential”).

Cool diagram! I would suggest rephrasing the Longtermism description to say “We should focus directly on future generations.” As it is, it implies that people only work on animal welfare and global poverty because of moral positions, rather than concerns about tractability, etc.

Answer by seanrson, Aug 23, 2022
14
0
0

Glad to have you here :D

I'm just going to plug some recommendations for suffering-focused stuff: You can connect with other negative utilitarians and suffering-focused people in this Facebook group, check out this career advice, and explore issues in ethics and cause prioritization here.

Julia Wise (who commented earlier) runs the EA Peer Support Facebook group, which could be good to join, and there are many other EA and negative utilitarian/suffering-focused community groups. Feel free to PM me!

Also, spent hens are almost always sold for slaughter, during which many are probably exposed to torture-level suffering. I remember looking into this a while back and found only one pasture farm where spent hens were not sold for slaughter. You can find details for many farms here: https://www.cornucopia.org/scorecard/eggs/

I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing.

I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve perhaps overs... (read more)

Cool! I look forward to learning more about your work. FYI, your Discord invite link is broken.

On the other hand, this would exclude people whose main issue with longtermism is epistemic in nature. But maybe it’s too hard to come up with an acceptable catch-all term.

CRS has written about career advice for s-risks.

CLR does some coaching.

HLI seems to be working on career recommendations.

5
brb243
2y
So cool. Basically: CRS focuses on careers in reducing suffering or s-risks (various roles and an individualized approach); CLR focuses on s-risks (perhaps with more networking referrals, in order to promote CLR's values when the applicant shares them); and HLI focuses on improving the happiness of the current generation in the near term (suffering prioritization or deprioritization is not explicitly stated), which may nevertheless have long-term effects (e.g., if positive education leads to malevolence prevention), learning with the applicant about top opportunities for impact in the applicant's area of interest that would not otherwise have been taken (rather than subtracting the counterfactual contribution of the next-best hire), while career profiles are being further developed. These are somewhat different ways of understanding specialization (in terms of suffering prioritization and impact metric, wellbeing vs. health), but they may be even better at supporting overall enjoyed systems!

I might be misunderstanding but I don’t think the intuition you mentioned is really an argument for hedonism, since one can agree that there must be beings with conscious experiences for anything to matter without concluding that conscious experience itself is the only thing that matters.

[anonymous]
2y
15
1
0

I agree that this is the next stage of the dialectic. But then the situation is: sentient experience is a necessary condition on there being value in the world. No other putative intrinsically valuable thing (preference satisfaction, authenticity, friendship, etc.) is a necessary condition on there being value in the world - e.g., even proponents of the view that authenticity is good don't think it is necessary for there being value in the world, as illustrated by the example of a torture experience machine. If you are assessing whether something is intrinsica... (read more)

I think this analysis should make more transparent its reliance on something like total utilitarianism and the presence of a symmetry between happiness and suffering. In the absence of these assumptions, the instances of extreme suffering and exploitation in factory farming more clearly entail "approximate veg*nism."

Consider the fate of a broiler chicken being boiled alive. Many people think that such extreme suffering cannot be counterbalanced by other positive aspects of one's own life, so there is really no way to make the chicken's life "net positive."... (read more)

4
Charles He
2y
Here are links:
* Rob Miles: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg
* Two Minute Papers: https://www.youtube.com/c/K%C3%A1rolyZsolnai
* Yannic Kilcher: https://www.youtube.com/c/YannicKilcher

Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, as they are defined along different dimensions: longtermism is defined by the temporal scope of effects, while animal advocacy is defined by whose interests we focus on. Of course, one could argue that animal interests are negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.

This post does a great job of emphasizing ways in which animal advocacy should inform our efforts to improve the ver... (read more)

I come back to this post quite frequently when considering whether to prioritize MCE (via animal advocacy) or AI safety. It seems that these two cause areas often attract quite different people with quite different objectives, so this post is unique in its attempt to compare the two based on the same long-term considerations. 

I especially like the discussion of bias. Although some might find the whole discussion a bit ad hominem, I think people in EA should take seriously the worry that certain features common in the EA community (e.g., an attraction towards abstract puzzles) might bias us towards particular cause areas.

I recommend this post for anyone interested in thinking more broadly about longtermism.

Yeah I have been in touch with them. Thanks!

Yeah I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should ... (read more)

I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).

Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with... (read more)

1
Mau
3y
Hm, I can't wrap my head around rejecting transitivity. Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood. 

Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.

1
Mau
3y
Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence? (In other words, "objective list" theories of well-being (if they hold some lives to be better than neutral) + transitivity seem to imply that creating good lives is possible and valuable, which implies (*) is false. People with these theories of well-being could avoid that conclusion by (a) rejecting that some lives are better than neutral, or (b) by rejecting transitivity. Do they?)

Local vs. global optimization in career choice

Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession for keeping doors open in case I realize that I really should have done this other thing.

Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden's aptitudes post, etc.]. I'm still trying to process these ideas and think that the disti... (read more)


You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes.

What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that im

... (read more)
5
EdoArad
3y
Good points, thanks :) I agree with everything here. One view on how we impact the future is to ask how we would want to construct it, assuming we had direct control over it. I think this view lends more support to the points you make, and it's where population ethics feels much murkier to me. However, there are some things we might be able to put some credence on that we'd expect future people to value. For example, I think it's more likely than not that future people would value their own welfare. So while this is not an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions, and it definitely points at where (a potentially enormous amount of) value lies. Say, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising the interventions there actually are). Regarding quasi-aesthetic desires, I agree and think this is very important to understand further. Personally, I'm confused as to whether I should value these kinds of desires (even at the expense of something based on welfarism), or whether I should think of them as a bias to overcome. As you say, I also guess that this might be behind some of the reasons for differing stances on cause prioritization.

Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.

I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead just referring ... (read more)

Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:

  1. I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introducti

... (read more)
Mau
3y
13
0
0

Thanks!

Ah sorry, I hadn't seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:

  • The dichotomy between x-risk reduction and s-risk reduction seems off to me. As I understand them, prominent definitions of x-risks [1] [2] [3] (especially the more thorough/careful discussion in [3]) are all broad enough for s-risks to count as x-risks (especially if we're talking about permanent / locked-in s-risks, which
... (read more)

Hi Aaron, thanks for your reply. I’ve listed some suggestions in one of the hyperlinks above, but I’ll put it here too: https://docs.google.com/document/d/1niRwbh3eejByFQwoiZ0NiaSZDUawn206PUmHs7aKL0A/edit?usp=sharing

I have not put much time into this, so I’d love to hear your thoughts on the proposed changes.

Some criticism of the EA Virtual Programs introductory fellowship syllabus:

I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.

I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitar... (read more)

2
NunoSempere
3y
This seems fixable by sending an email to whoever is organizing the syllabus, possibly after writing a small syllabus on s-risks yourself, or by finding one already written.

Yeah I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable.

https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering is helpful.

Just adding onto this, for those interested in learning how a Kantian meta-ethical approach might be compatible with a consequentialist normative theory, see Kagan's "Kantianism for Consequentialists": https://campuspress.yale.edu/shellykagan/files/2016/07/Kantianism-for-Consequentialists-2cldc82.pdf

Has Singer ever said anything about s-risks? If not, I’m curious to hear his thoughts, especially concerning how his current view compares to what he would’ve thought during his time as a preference utilitarian.

Sorry, I'm a bit confused on what you mean here. I meant to be asking about the prevalence of a view giving animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans. But I think saying they are less strong is different than saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.

5
MichaelStJules
3y
Ya, my point is that I'd guess most dedicated EAs would endorse the principle in the abstract, but they might not think animals matter much in practice. Also, for what it's worth, about half of EAs who responded to the diet question are at least vegetarian, and still more are reducing meat consumption (source: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics).

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

Woops yeah, I meant to say that GPI is good about this but the transparency and precision gets lost as ideas spread. Fixed the confusing language in my original comment.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Yeah this is another really great example of how EA is lacking in transparent reasoning. This is especially problematic s... (read more)

Thanks for this post. Looking forward to more exploration on this topic.

I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there. 

As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals which you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.

7
MichaelStJules
3y
I would guess that most of the more dedicated EAs believe in something roughly like "equal consideration of interests" ("equal consideration of equal interests" to be more specific), but many might think nonhuman animals' interests are much less strong/important than humans, on average.

Thanks for writing this up.

These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about what philosophical assumptions are being made and how they affect cause prioritization. Of course, the philosophers associated with GPI are good about this, but often this transparency and precision gets lost as ideas spread.

For instance, in discussions of longtermism, totalism often seems to be assumed without making that assumption clear. Other views are often misrepresented,... (read more)

7
nil
3y
Thanks for the example! I worry that even when our philosophical assumptions are stated (which is already a good place to be in), it is easy to miss their important implications and to not question whether these implications make sense (as opposed to jumping directly to cause selection). (This kind of rigor would arguably be over-demanding in most cases but could still be a health measure for EA materials.)

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce th

... (read more)

Hi all, I'm sorry if this isn't the right place to post. Please redirect me if there's somewhere else this should go.

I'm posting on behalf of my friend, who is an aspiring AI researcher in his early 20s and is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably within the USA, especially California).

Please message jeffreypythonclass+ea@gmail.com if you're interested!

2
Linch
3y
Can you be a bit more specific than "aspiring AI researcher"? E.g., are they interested in AI safety, interested in AI research for other EA reasons, interested in $, interested in AI as a scientific question, etc.?
7
MichaelDickens
3y
You might try the East Bay EA/Rationality Housing Board

AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:

"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of b... (read more)

This was such a fun read. Bentham is often associated with psychological egoism, so it seems somewhat odd to me that he felt a need to exhort readers to fulfill their own pleasure (since apparently all actions are done on this basis anyway).

Could you say more (or work on that post) about why formal methods will be unhelpful? Why are places like Stanford, CMU, etc. pushing to integrate formal methods with AI safety? Also Paul Christiano has suggested formal methods will be useful for avoiding catastrophic scenarios. (Will update with links if you want.)

5
adamShimi
4y
Hum, I think I wrote my point badly in the comment above. What I mean isn't that formal methods will never be useful, just that they're not really useful yet, and that more pure AI safety research is needed before they can be.

The general reason is that all formal methods try to show that a program follows a specification on a model of computation. Right now, a lot of the work on formal methods applied to AI focuses on adapting known formal methods to the specific programs (say, neural networks) and the right model of computation (in what contexts do you use these programs, and how can you abstract their execution to make it simpler). But one point this work fails to address is the question of the specification. Note that when I say specification, I mean a formal specification; in practice, it's usually a modal logic formula, in LTL for example.

And here we get to the crux of my argument: nobody knows the specification for almost all of the AI properties we care about. Nobody knows the specification for "recognizing kittens" or "correctly answering a question in English." Even for safety questions, we don't yet have a specification of "doesn't manipulate us" or "is aligned." That's the work that still needs to be done, and that's what people like Paul Christiano and Evan Hubinger, among others, are doing. But until we have such specifications, formal methods will not be really useful for either AI capability or AI safety.

Lastly, I want to point out that working on formal methods for AI is also a means to get money and prestige. I'm not going to go full Hanson and say that's the only reason, but it's still part of the international situation. I have examples of people in France getting AI-related funding for a project that is really, truly useless for AI.
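To make concrete what kind of object such a formal specification is, here is a classic LTL property for a simple request/response system (a standard textbook-style example, not one taken from the comment above):

G(request → F response)

read as "globally (at every time step), if a request occurs, a response eventually follows." The crux of the comment is that nobody currently knows how to write an analogous formula for properties like "recognizes kittens" or "is aligned."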