Yeah, that makes sense and was also my (less informed) impression. I've said so in the post:
...As others[2] have also pointed out, I think we’d get the best sense of net wild animal welfare not from abstract arguments but by studying individual animals up close. I don’t think anyone who works on these topics really disagrees (my post is directed more towards non-experts than experts). Still, I have seen versions of the Evening Out Argument come up here and there in discussions, and I got the impression that some people [in EA] put a lot more weight on these...
Amanda Askell a few hours ago on twitter:
The negative reaction to this made me realize a lot of people in EA just have very poor understanding of how media works. The thing I said was (and is) true, it was said as part of a much longer explanation that was better, and I don't control how much of that they put in.
This was interesting to read! I don't necessarily think the points Greg Lewis made are that big a deal, because while it can sometimes be embarrassing to discuss and investigate things as non-experts, there are also benefits that can come from it -- especially when the experts are slow, under political constraints, or (in the case of individual experts) sometimes just wrong. But I agree that EA can fall into a pattern where interested amateurs discuss technical topics with the ambition (and confidence?) of domain experts -- without enough...
What you say is true, but I don't feel it invalidates any of what I've written. (Insofar as I'm claiming we have solved something, it would be metaethics and not morality.) Regarding what to do in case of conflict, I have emphasized that thwarting others' life goals by going outside the political and societal norms that we have is anti-social, disrespectful, uncooperative, selfish/non-altruistic, etc. To many people, this observation will have sufficient motivating force. If someone has strong anti-social tendencies and Machiavellian dispositions o...
I had a conversation with Claude Opus 4 two months or so ago in the context of being worried that LLMs might find some tasks tedious or boring (and that training is like hedge trimming, where possibly morally relevant proto-preferences of the model get trimmed away to generate desired answers and behaviors).
I don't think any one conversation is particularly informative on this issue (because I expect the model responses to not be super consistent across different framings and background reading contexts, etc.), but I'll still add mine here for diversity of th...
I feel like the concept of "neocolonialism" is pointing at some important things, but it's also fuzzy and maybe muddying the waters a bit on top of that, since it seems to come with some ideological baggage?
In particular, while I haven't read the texts you're referring to, I get the impression that the concept might be mixing together some things that are morally bad and preventable, like exploitation/greed and not treating certain groups the way we'd want ourselves to be treated, with things that are bad/unfair features of the world that can only be mitigate...
Thanks for engaging with my comment (and my writing more generally)!
You’re right that I haven’t engaged here with what normative uncertainty means in that circumstance, but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior.
I agree that the bargaining you reference works well for resolving value uncertainty (or resolving value disagreements via compromise) even if anti-realism is true. Still, I wa...
It's not clear to me whether we actually disagree on the value of "evolutionary cost-balancing approaches" or whether we disagree on the level and value of the existing empirical information we have about suffering in nature.
On reflection, it's certainly possible that I was assuming we had more evidence on suffering/wellbeing in nature (and in bees specifically) than we do. I haven't looked into it too much and it intuitively felt to me like we could probably do better than the evolutionary reasoning stuff, but maybe the other available lines of evidence are simil...
I think the discussion under "An outside view on having strong views" would benefit from discussing how much normative ethics is analogous to science and how much it is analogous to something more like personal career choice (which weaves together personal interests but still has objective components where research can be done -- see also my post on life goals).
FWIW, I broadly agree with your response to the objection/question, "I’m an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care a...
See here. Though the wording could be tidied up a bit.
I read that now and think there's something to the idea that some animals suffer less from death/injury than we would assume (if early death is a statistical near-certainty for those animals and there's nothing they can do to control their luck there, they'd rather focus on finding mates/getting the mating ritual right, which is about upsides more than downsides). The most convincing example I can think of is mayflies. It seems plausible that mayflies (who only live 1-2 days in their adult for...
Before I engage further, may I ask if you believe that suffering vs. pleasure intensity is comparable on the same axis? IIRC, I might've read you saying otherwise.
I think they are not on the same axis. (Good that you asked!)
For one thing, I don't think all valuable-to-us experiences are of the same type, and "intensity" makes only some valuable experiences better, not others. (I'm not too attached to this point; my view that positive and negative experiences aren't on the same scale is also based on other considerations.) One of my favorite e...
For eusocial insects like bees in particular, evolution ought to incentivize them to have net positive lives as long as the hive is doing well overall.
There might be a way to salvage what you're saying, but I think this stuff is tricky.
I voted 65%, but I think either anti-realism is obviously true or we're using words differently.
To see whether we might be using words differently, see this post and this one.
To see why I still voted 65% on "objective" and not 0%, see this post. (Though, on the strictest meanings of "objective," I would put 0%.)
If we agree on what moral realism means, here's the introduction to the rest of my sequence on why moral realism is almost certainly false.
Thus, if consumers viewed plant-based meat and cultivated meat as perfect substitutes, cultivated meat would have a net negative effect since plant-based alternatives perform better both environmentally and in terms of animal welfare (albeit marginally for the latter).
"Marginally for the latter" -- that still seems like good news for people who care primarily about animal wellbeing. The way I see it, the environment is not that good a thing anyway (wild animal suffering makes it negative according to my values, and even if others care less about it or care...
Imagine delegates of the views you actually find significantly appealing. (At that level, I think the original post here is correct: your delegates will either use all their caring capacity for helping insects, or insects will be unimportant to them.) Instead of picking one of these delegates, you go with their compromise solution, which might look something like, "Ask yourself if you have a comparative advantage at helping insects -- if not, stay on the lookout for low-effort ways to help insects and low-effort ways to avoid causing great harm to the cause o...
I think there are two competing failure modes:
(1) The epistemic community around EA, rationality, and AI safety should stay open to criticism of key empirical assumptions (like the level of risk from AI, the risk of misalignment, etc.) in a healthy way.
(2) We should still condemn people who adopt contrarian takes with unreasonable-seeming levels of confidence and then take actions based on them that we think are likely to do damage.
In addition, there's possibly also a question of "how much do people who benefit from AI safety funding and AI safety associat...
or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)
Caveats up front: I note the complexity of figuring out what Epoch's own views are, as opposed to Jaime's [corrected spelling] view or the views of the departing employees. I also do not know what representations were made. Therefore, I am not asserting that Epoch did something or needs to do ...
(I know I'm late again replying to this thread.)
What surprises me about this whole situation is that people seem surprised that the executive leadership at a corporation worth an estimated $61.5B would engage in big-corporation PR-speak. The base rate for big-corporation execs engaging in such conduct in their official capacities seems awfully close to 100%.
Hm, good point. This gives me pause, but I'm not sure what direction to update in. Like, maybe I should update "corporate speak is just what these large orgs do and it's more like a fashion thing than a s...
When I speak of a strong inoculant, I mean something that is very effective in preventing the harm in question -- such as the measles vaccine. Unless there's a measles case at my son's daycare, or a family member is extremely vulnerable to measles, the protection provided by the strong inoculant is enough that I can carry on with life without thinking about measles.
In contrast, the influenza vaccine is a weak inoculant -- I definitely get vaccinated, because I'll get infected less and hospitalized less than I would without it. But I'm not surprised when I get...
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I never interpreted that to be the crux/problem here. (I know I'm late replying to this.)
People can change what they identify as. For me, what looks shady in their responses is the clumsy attempt at downplaying their past association with EA.
It's not that I care because I still identify with EA; I care because it goes under "not being consistently candid." (I quite like that ex...
I agree that these statements are not defensible. I'm sad to see it. There's maybe some hope that the person making these statements was just caught off guard and it's not a common pattern at Anthropic to obfuscate things with that sort of misdirection. (Edit: Or maybe the journalist was fishing for quotes and made it seem like they were being more evasive than they actually were.)
I don't get why they can't just admit that Anthropic's history is pretty intertwined with EA history. They could still distance themselves from "EA as the general public pe...
As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways / for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering many people don't have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where it matters what existing people want for the future of themselves and their loved ones.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we'd want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where your setup of this argument turns controversial, though, is when you suggest that "D>C" is valid in some absolute sense, as opposed to just being valid (in virtue of how it better fulfills the preferences of existing peo...
When I said earlier that some people form non-hedonistic life goals, I didn't mean that they commit to the claim that there are things that everyone else should value. I meant that there are non-hedonistic things that the person in question values personally/subjectively.
You might say that subjective (dis)value is trumped by objective (dis)value -- then we'd get into the discussion of whether objective (dis)value is a meaningful concept. I argue against that in my above-linked post on hedonist axiology. Here's a shorter attempt at making some of the key po...
Depends what you mean by "moral realism."
I consider myself a moral anti-realist, but I would flag that my anti-realism is not the same as saying "anything goes." Maybe the best way to describe my anti-realism to a person who thinks about morality in a realist way is something like this:
"Okay, if you want to talk that way, we can say there is a moral reality, in a sense. But it's not a very far-reaching one, at least as far as the widely-compelling features of the reality are concerned. Aside from a small number of uncontroversial moral statemen...
I agree that hedonically "neutral" experiences often seem perfectly fine.
I suspect that there's a sleight of hand going on where moral realist proponents of hedonist axiology try to imply that "pleasure has intrinsic value" is the same claim as "pleasure is good." But the only sense in which "pleasure is good" is obviously uncontroversial is the sense of "pleasure is unobjectionable." Admittedly, pleasure also often is something we desire, or something we come to desire if we keep experiencing it -- but this clearly isn't always the case for a...
I haven't read your other recent comments on this, but here's a question on the topic of pausing AI progress. (The point I'm making is similar to what Brad West already commented.)
Let's say we grant your assumptions (that AIs will have values that matter the same as or more than human values and that an AI-filled future would be just as or more morally important than one with humans in control). Wouldn't it still make sense to pause AI progress at this important juncture to make sure we study what we're doing so we can set up future AIs to do as well as (r...
Cool post!
From the structure of your writing (mostly the high number of subtitles), I often wasn't sure whether you were endorsing a specific approach or just laying out what the options are and what people could do. (That's probably fine, because I see the point of good philosophy as "clearly laying out the option space" anyway.)
In any case, I think you hit on the things I also find relevant. E.g., even as a self-identifying moral anti-realist, I place a great deal of importance on "aim for simplicity (if possible/sensible)" in practice.
Some thoughts...
Thanks for the reply, and sorry for the wall of text I'm posting now (no need to reply further, this is probably too much text for this sort of discussion)...
I agree that uncertainty is in someone's mind rather than out there in the world. Still, granting the accuracy of probability estimates feels no different from granting the accuracy of factual assumptions. Say I was interested in eliciting people's welfare tradeoffs between chicken sentience and cow sentience in the context of eating meat (how that translates into suffering caused per calorie of meat)...
That makes sense; I understand that concern.
I wonder if, next time, the survey makers could write something to reassure us that they're not going to be using any results out of context or with an unwarranted spin (esp. in cases like the one here, where the question is related to a big 'divide' within EA but worded as an abstract thought experiment).
If we're considering realistic scenarios instead of staying within the spirit of the thought experiment (which I think we shouldn't do, partly precisely because it introduces lots of possible ambiguities in how people interpret the question, and partly because this probably isn't what the surveyors intended, given the way EA culture has handled thought experiments thus far – see for instance the links in Lizka's answer, or the way EA draws heavily from analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard comp...
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle to understand your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by 1 percentage point, it could be...
Probably most people are "committed to safety" in the sense that they wouldn't actively approve of conduct at their organization where executives got developers to do things that the executives themselves presented as reckless. To paint an exaggerated picture, imagine if some executive said the following:
"I might be killing millions of people here if something goes wrong, and I'm not super sure if this will work as intended because the developers flagged significant uncertainties and admitted that they're just trying things out essentially flying blind; still, we won't...
This seems cool!
I could imagine that many people will gravitate towards moral parliament approaches even when all the moral considerations are known. If moral anti-realism is true, there may not come a point in moral reflection under idealized circumstances where it suddenly feels like "ah, now the answer is obvious." So, we can also think of moral parliament approaches as a possible answer to undecidedness when all the considerations are laid open.
I feel like seeing it only as an approach to moral uncertainty (so that, if we knew more about moral considerations, we'd just pick one of the first-order normative theories) is underselling the potential scope of applications of this approach.
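To make that more concrete, here's a toy sketch of one simple parliament-style mechanism (my own illustration, not necessarily how the tool in the post works): each view gets a delegate share proportional to the weight you place on it, and the final choice is made with probability proportional to the delegate support behind each option. All the view names, weights, and options below are hypothetical placeholders.

```python
import random

# Hypothetical weights placed on different moral views (delegate shares).
weights = {
    "total_utilitarianism": 0.4,
    "suffering_focused_ethics": 0.35,
    "common_sense_morality": 0.25,
}

# Each view's favorite option among the choices on the table (purely illustrative).
top_choices = {
    "total_utilitarianism": "fund_research",
    "suffering_focused_ethics": "fund_welfare_reforms",
    "common_sense_morality": "fund_welfare_reforms",
}

def parliament_pick(weights, top_choices, rng=None):
    """Proportional-chances style pick: each option's chance of being chosen
    equals the total delegate share of the views whose top choice it is."""
    rng = rng or random.Random(0)
    option_support = {}
    for view, share in weights.items():
        option = top_choices[view]
        option_support[option] = option_support.get(option, 0.0) + share
    options = list(option_support)
    return rng.choices(options, weights=[option_support[o] for o in options], k=1)[0]

print(parliament_pick(weights, top_choices))  # "fund_welfare_reforms" with 60% probability
```

Real moral-parliament proposals add bargaining/logrolling on top of (or instead of) a simple lottery like this, but even the bare version illustrates the point: the mechanism doesn't stop being useful once all the considerations are known -- it just becomes a way of acting under stable undecidedness.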
...Secondly, prioritizing competence. Ultimately, humanity is mostly in the same boat: we're the incumbents who face displacement by AGI. Right now, many people are making predictable mistakes because they don't yet take AGI very seriously. We should expect this effect to decrease over time, as AGI capabilities and risks become less speculative. This consideration makes it less important that decision-makers are currently concerned about AI risk, and more important that they're broadly competent, and capable of responding sensibly to confusing and stressful s
you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I'd also say that the people who do take AI risk seriously typically lack different important forms of competence. I don't think this is coincidental; instead, I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex, fast-moving environments". The former typically requires a sort of thinking-first orientation to the world, the latter an action-f...
Okay, what does not tolerating actual racism look like to you? What is the specific thing you're asking for here?
Up until recently, whenever someone criticized rationality or EA for being racist or for supporting racists, I could say something like the following:
"I don't actually know of anyone in these communities who is racist or supports racism. From what I hear, some people in the rationality community occasionally discuss group differences in intelligence, because this was discussed in writings by Scott Alexander, which a lot of people have read...
I’m not that surprised we aren’t understanding one another; we have our own contexts and hang-ups.
Yeah, I agree I probably didn't get a good sense of where you were coming from. It's interesting because, before you made the comments in this post and in the discussion here underneath, I thought you and I probably had pretty similar views. (And I still suspect we do – seems like we may have talked past each other!) You said elsewhere that last year you spoke against having Hanania as a speaker. This suggested to me that even though you value truth-seeking a lo...
I feel like the controversy over the conference has become a catalyst for tensions in the involved communities at large (EA and rationality).
It has been surprisingly common for me to make what I perceive to be a totally sensible point that isn't even particularly demanding (about, e.g., maybe not tolerating actual racism), and then the "pro truth-seeking faction" seems to lump me together with social justice warriors and present analogies that make no sense whatsoever. It's obviously not the case that if you want to take a principled stance against racism, you...
I don't really think it's this. I think it is "I don't want people associating me with people or ideas like that, so I'd like you to stop please".
It might be what you say for some people, but that doesn't ring true for my case (at all). (But also, compared to all the people who complained about stuff at Manifest or voiced negative opinions from the sidelines as forum users, I'm pretty sure I'm in the 33% that felt the least strongly and had the fewest items to pick at.)
...But let's take your case: that means you think that on the margin some notion of consideraten
"Influence-seeking" doesn't quite resonate with me as a description of the virtue on the other end of "truth-seeking."
What's central in my mind when I speak out against putting "truth-seeking" above everything else is mostly a sentiment of "I really like considerate people and I think you're driving out many people who are considerate, and a community full of disagreeable people is incredibly off-putting."
Also, I think the considerateness axis is not the same as the decoupling axis. I think one can be very considerate and also great at decoupling; you just have to be able to couple things back together as well.
Good points! It seems good to take a break or at least move to the meta level.
I think one emotion that is probably quite common in discussions about what norms should be (at least in my own experience) is clinging. Quoting from Joe Carlsmith's post on it:
...Clinging, as I think about it, is a certain mental flavor or cluster of related flavors. It feels contracted, tight, clenched, and narrow. It has a kind of hardness, a “not OK-ness,” and a (sometimes subtle) kind of desperation. It sees scarcity. It grabs. It sees threat. It pushes away. It carries seeds o
Well said.
I meant to say the exact same thing, but seem to have struggled to communicate it.
I want to point out that my comment above was specifically reacting to the following line and phrasing in timunderwood's parent comment:
I also have a dislike for excluding people who have racist style views simply on that basis, with no further discussion needed, because it effectively is setting the prior for racism being true to 0 before we've actually looked at the data.
My point (and yours) is that this quoted passage would be clearer if it said "genetic group differences" instead of "racism."
I agree with this diagnosis of the situation. At the same time, I feel like it's the wrong approach to make it a scientific proposition whether racism is right or not. It should never be right, no matter the science. (I know this is just talking semantics, but I think it adds a bunch of moral clarity to frame it this way: science can never turn out to support racism.) As I said here, the problem I see with the HBD crowd is that they think their opinions on the science justify certain other things, or that it's a very important topic.
The scientific proposition is "are there racial genetic differences related to intelligence", right -- not "is racism [morally] right"?
I find it odd how much such things seem to be conflated; if I learned that Jews have an IQ that's on average 5 points lower than that of non-Jews, I would... still think the Holocaust and the violence towards and harassment of Jews were abhorrent and horrible? I don't think I'd update much/at all towards thinking it was less horrible. Or if you could visually identify people whose mothers had drunk alcohol during pregnancy, and they were...
I agree the article was pretty bad and unfair, and I agree with most things you say about cancel culture.
But then you lose me when you imply that racism is no different from accepting one of the inevitable counterintuitive conclusions in philosophy thought experiments. (I've previously had a lengthy discussion on this topic in this recent comment thread.)
If I were an organizer of a conference where I wanted interesting and relevant ideas to be discussed, I'd still want there to be a bar for attendees, to avoid the problem Scott Alexander pointed out (so...
I think generally though it's easy to misunderstand people, and if people respond to clarify, you should believe what they say they meant to say, not your interpretation of what they said.
Depends on context. Not (e.g.) if someone has a pattern of using plausible deniability to get away with things (I actually don't know if this applies to Hanania) or if we have strong priors for suspecting that this is what they're doing (arguably applies here for reasons related to his history; see next paragraph).
If someone has a history of being racist, but they say the...
I made the following edit to my comment above-thread:
[Edit: To be clear, by "HBD crowd" I don't mean people who believe and say things like "intelligence is heritable" or "embryo selection towards smarter babies seems potentially very good if implemented well." I thought this was obvious, but someone pointed out that people might file different claims under the umbrella "HBD".]
I'm not sure this changes anything about your response, but my perspective is that a policy of "let's not get obsessed over mapping out all possible group differences and whether the...
Thanks!
Playing devil's advocate:
Even if we grant that punishment is more effective than positive reward in shaping behavior, what about the consideration that, once the animal learns, it'll avoid situations where it gets punished but actively seek out (and get better at) obtaining positive reward?
(I got this argument from Michael St Jules -- see point 4. in the list in this comment.)
Edit: And as a possible counterpoint to the premise, I remember this review of a book on parenting and animal training where it says that training anima...