All of Lukas_Gloor's Comments + Replies

Thanks! 

Playing devil's advocate: 

Even if we grant that punishment is more effective than positive reward in shaping behavior, what about the consideration that, once the animal learns, it'll avoid situations where it gets punished, but it actively seeks out (and gets better at) obtaining positive reward? 

(I got this argument from Michael St Jules -- see point 4. in the list in this comment.)

Edit: And as a possible counterpoint to the premise, I remember this review of a book on parenting and animal training where it says that training anima... (read more)

5
NickLaing
""[...] not getting a reward may create frustration, which is nothing but another form of pain." From my human experience, I can be living "net positive" while being extremely frustrated about something.  In general I think direct observation of individuals is a fantastic way forward. Maybe even the only way forward here. Theoretical arguments make so many assumptions I fee llike I could argue all sides here. I'm amazed EAs haven't funded some individual animal observation stuff. Put a small cam and a fitbit on a deer or other prey animal and see what they get up to? My guess is that the life would look more positive than we expect.
8
Jim Buhler
Fair point, though then: * Any being that is motivated by severe pain where yours is motivated by pleasure (or lighter pain like small frustration, indeed) should be selected for over yours. * Your animal will presumably still need reminders of what it feels like not to avoid these situations to actually be motivated to avoid them. (Unless the suffering it felt the last time was so traumatizing that it'll never make the mistake again but then, this hardly goes against the suffering-prevalence thesis.) * We know (from empirical findings, this time) that many of those pain-inducing situations are common and hard to (systematically) avoid. * (EDIT to add:) Why would your learning animal need rewards if it can just not repeat past mistakes? Maybe learning abilities say things about how large the welfare range is more than about pain vs. pleasure (see Schukraft et al. 2024, sec. 3.1.4, and 4.4.3). In absolute terms, fair. I'm just skeptical that judgment calls on net welfare after empirically studying the lives of wild animals are any better. If there's a logical or evolutionary reason to expect X, this seems like a stronger reason for X than "we've looked at what some wild animals commonly experience and we feel like what we see means X."  Maybe stronger does not mean strong in absolute, though. But then, the conclusion would not be that we shouldn't update much based on theoretical arguments of this sort, but that there is no evidence we can find (whether theoretical or empirical) on which we could base significant updates. Interesting, I'll look into this. Thanks!

Yeah, that makes sense and was also my (less informed) impression. I've said so in the post:

As others[2] have also pointed out, I think we’d get the best sense of net wild animal welfare not from abstract arguments but by studying individual animals up close. I don’t think anyone who works on these topics really disagrees (my post is directed more towards non-experts than experts). Still, I have seen versions of the Evening Out Argument come up here and there in discussions, and I got the impression that some people [in EA] put a lot more weight on these

... (read more)
5
abrahamrowe
Oh yeah! Sorry, missed that. But to be clear, I definitely agree that this was an important point to put out there and am glad you did! :) Thanks for writing it.

Amanda Askell a few hours ago on twitter:

The negative reaction to this made me realize a lot of people in EA just have very poor understanding of how media works. The thing I said was (and is) true, it was said as part of a much longer explanation that was better, and I don't control how much of that they put in.

This was interesting to read! I don't necessarily think the points that Greg Lewis pointed out are that big of a deal because while it can sometimes be embarrassing to discuss and investigate things as non-experts, there are also benefits that can come from it. Especially when the experts seem to be slow or under political constraints or sometimes just wrong in the case of individual experts. But I agree that EA can fall into a pattern where interested amateurs discuss technical topics with the ambition (and confidence?) of domain experts -- without enough... (read more)

What you comment is true but I don't feel like it invalidates any of what I've written. (Insofar as I'm claiming we have solved something, it would be metaethics and not morality.) Regarding what to do in case of conflict, I have emphasized that thwarting others' life goals by going outside the political and societal norms that we have is anti-social, disrespectful, uncooperative, selfish/non-altruistic, etc. To many people, this observation will have sufficient motivating force. If someone has strong anti-social tendencies and Machiavellian dispositions o... (read more)

Biorisks: The chikungunya virus continues to spread, including in France and the UK.

France has locally acquired cases (so the mosquito already lives there) whereas the UK cases are all linked to travel, I think.

I had a conversation with Claude Opus 4 two months or so ago in the context of being worried that LLMs find some tasks tedious or boring (and training being like hedge trimming where possibly morally relevant proto preferences of the model get trimmed away to generate desired answers and behaviors). 

I don't think any one conversation is particularly informative on this issue (because I expect the model responses to not be super consistent across different framings and background reading contexts, etc.), but I'll still add mine here for diversity of th... (read more)

6
Linch
It's funny (and I guess unsurprising) that Will's Gemini instance and your Claude instance both reflected what I would have previously expected both of your ex ante views to be! 

I feel like the concept of "neocolonialism" is pointing at some important things, but it's also fuzzy and maybe muddling the waters a bit on top of that, since it seems to come with some ideological baggage?

In particular, while I haven't read the texts you're referring to, it gives me the impression that it might be mixing together some things that are morally bad and preventable, like exploitation/greed and not treating certain groups the way we'd want ourselves to be treated, with things that are bad/unfair features of the world that can only be mitigate... (read more)

Thanks for engaging with my comment (and my writing more generally)! 

You’re right I haven’t engaged here about what normative uncertainty means in that circumstance but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior.

I agree that the bargaining you reference works well for resolving value uncertainty (or resolving value disagreements via compromise) even if anti-realism is true. Still, I wa... (read more)

It's not clear to me whether we actually disagree on the value of "evolutionary cost-balancing approaches", or we disagree on the level and value of the existing empirical information we have about suffering in nature.

On reflection, it's certainly possible that I was assuming we had more evidence on suffering/wellbeing in nature (and in bees specifically) than we do. I haven't looked into it too much and it intuitively felt to me like we could probably do better than the evolutionary reasoning stuff, but maybe the other available lines of evidence are simil... (read more)

7
Linch
Btw I really appreciate your substantive engagement and both your carefulness and detail of thought, I'll probably revisit this thread in the future if I ever want to write another post/detailed comment about insects/wild animals!
6
Linch
Thanks! Here's the 2019 RP report on honeybee welfare and interventions in case you're interested, other people are welcome to comment if there's more recent work.  That's very fair! Yeah I feel the same way albeit maybe relatively happier about the evolutionary arguments; certainly part of the value of writing up the evolutionary arguments is having them critiqued; the eusociality stuff in particular I don't think is original to me but I'm not aware of a clear writeup elsewhere (and I didn't find one when I was trying to look for something to link).

I think the discussion under "An outside view on having strong views" would benefit from discussing how much normative ethics is analogous to science and how much it is analogous to something more like personal career choice (which weaves together personal interests but still has objective components where research can be done -- see also my post on life goals).

FWIW, I broadly agree with your response to the objection/question, "I’m an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care a... (read more)

6
Marcus_A_Davis
Hey Lukas, Thanks for the detailed reply. You raise a number of different interesting points and I’m not going to touch on all of them, given a lack of time but there are a few I want to highlight. While I can see how you might make this claim, I don’t really think ethics is very analogous to personal career choice. Analogies are always limited (more on this later) but I think this analogy probably implies too much “personal fit” in career choice, which are often as much about “well, what do you like to do?” as they are about “this is what will happen if you do that?”. I think you’re largely making the case more for the former, with some part of the latter, and for morality I might push for a different combination, even assuming a version of anti-realism. But perhaps all this breaks down on what you think of career choice, where I don’t have particularly strong takes. You’re right I haven’t engaged here about what normative uncertainty means in that circumstance but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior. In one case, it’s largely because that’s how we’d come to the right answer but in other cases it would be because there’s no right answer to the matter and the only way to resolve disputes is through aggregating opinions across different people and belief systems. That said, I believe–correct me if I’m wrong–your posts are arguing for a particularly narrow version of realism that is more constrained than typical and that there’s a tension between moral realism and moral uncertainty. Stepping back a bit, I think a big thrust of my post is that you generally shouldn’t make statements like “anti-realism is obviously true” because the nature of evidence for that claim is pretty weak, even if the nature of the arguments for you reaching that conclusion were clear and are internally compelling to you. Y

See here. Though the wording could be tidied up a bit. 

I read that now and think there's something to the idea that some animals suffer less from death/injury than we would assume (if early death is a statistical near-certainty for those animals and there's nothing they can do to control their luck there, so they'd rather focus on finding mates/getting the mating ritual right, which is about upsides more than downsides). The most convincing example I can think of are mayflies. It seems plausible that mayflies (who only live 1-2 days in their adult for... (read more)

6
Linch
Thank you for the detailed response and serious engagement! To be clear I definitely don't think my analyses here are anywhere close to the final word on these issues, nor do I think the existence of some models tells us much. It's not clear to me whether we actually disagree on the value of "evolutionary cost-balancing approaches", or we disagree on the level and value of the existing empirical information we have about suffering in nature.  For example, I certainly would not consider evolutionary arguments to be compelling for analyzing human or chicken suffering. Both because both typical humans and typical chickens are very far from their evolutionary environments, and because we have substantially more available empirical evidence (though as always less than we'd like). As I wrote in my post: I appreciate the nuances in your post! I also like  I think this is fair but also it feels a bit like an isolated demand for rigor here. I think of my post, admittedly written quickly and on various subjects I'm not an expert in, primarily as a critique of another post that to me feels much more simplistic in comparison.

Before I engage further, may I ask if you believe that suffering vs pleasure intensity is comparable on the same axis? Iirc I think I might've read you saying otherwise.

I think they are not on the same axis. (Good that you asked!)

For one thing, I don't think all valuable-to-us experiences are of the same type and "intensity" only makes some valuable experiences better, but not others. (I'm not too attached to this point; my view that positive and negative experiences aren't on the same scale is also based on other considerations.) One of my favorite e... (read more)

For eusocial insects like bees in particular, evolution ought to incentivize them to have net positive lives as long as the hive is doing well overall.

There might be a way to salvage what you're saying, but I think this stuff is tricky. 

  • I don't think there are objective facts about the net value/disvalue of experiences. (That doesn't mean all judgments about the topic are equally reasonable, though, so we can still have a discussion about what indicates higher or lower welfare levels on some kind of scale, even if there's no objective way to place the
... (read more)
9
Linch
I agree this stuff is very tricky! And I appreciate the detailed reply.  Before I engage further, may I ask if you believe that suffering vs pleasure intensity is comparable on the same axis? Iirc I think I might've read you saying otherwise.  This is not meant as a "gotcha" question, but just to set the parameters of debate/help decide whether we're likely to have useful object-level cruxes. I remember one time a good friend of mine made a crazy (from my perspective) claim about AI consciousness. I was about to debate him, but then remembered that he was an illusionist about experience. Which is a perfectly valid and logical position to hold, but does mean that it's less likely we'd have useful object-level things to debate on that question, since any object-level intuition differences are downstream of or at least overshadowed by this major meta-level difference.

I voted 65%, but I think either anti-realism is obviously true or we're using words differently.

To see whether we might be using words differently, see this post and this one

To see why I still voted 65% on "objective" and not 0%, see this post. (Though, on the most strict meanings of "objective," I would put 0%.) 

If we agree on what moral realism means, here's the introduction to the rest of my sequence on why moral realism is almost certainly false.

+1.

I wish we could contract the people involved in the production of that shrimp video to improve the image of EA.

"Morally way more serious than you would have thought, but able to take a joke better than you would have thought" feels like a combination that is hard to attack/tear down.

Thus, if consumers viewed plant-based meat and cultivated meat as perfect substitutes, cultivated meat would have a net negative effect since plant-based alternatives perform better both environmentally and in terms of animal welfare (albeit marginally for the latter).

"Marginally for the latter" -- that still seems like good news for people who care primarily about animal wellbeing. The way I see it, the environment is not that good a thing anyway (wild animal suffering makes it negative according to my values, and even if others care less about it or care... (read more)

Imagine delegates of views you find actually significantly appealing. (At that level, I think the original post here is correct and your delegates will either use all their caring capacity for helping insects, or insects will be unimportant to them.) Instead of picking one of these delegates, you go with their compromise solution that might look something like, "Ask yourself if you have a comparative advantage at helping insects -- If not, stay on the lookout for low-effort ways to help insects and low-effort ways to avoid causing great harm to the cause o... (read more)

I think there are two competing failure modes:

(1) The epistemic community around EA, rationality, and AI safety, should stay open to criticism of key empirical assumptions (like the level of risks from AI, risks of misalignments, etc.) in a healthy way.

(2) We should still condemn people who adopt contrarian takes with unreasonable-seeming levels of confidence and then take actions based on them that we think are likely doing damage.

In addition, there's possibly also a question of "how much do people who benefit from AI safety funding and AI safety associat... (read more)

or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)

Caveats up front: I note the complexity of figuring out what Epoch's own views are, as opposed to Jaime's [corrected spelling] view or the views of the departing employees. I also do not know what representations were made. Therefore, I am not asserting that Epoch did something or needs to do ... (read more)

With Chollet acknowledging that o1/o3 (and ARC 1 getting beaten) was a significant breakthrough, how much is this talk now outdated vs still relevant?

3
Yarrow Bouchard 🔸
I think it’s still very relevant! I don’t think this talk’s relevance has diminished. It’s just important to also have that more recent information about o3 in addition to what’s in this talk. (That’s why I linked the other talk at the bottom of this post.) By the way, I think it’s just o3 and not o1 that achieves the breakthrough results on ARC-AGI-1. It looks like o1 only gets 32% on ARC-AGI-1, whereas the lower-compute version of o3 gets around 76% and the higher-compute version gets around 87%. The lower-compute version of o3 only gets 4% on ARC-AGI-2 in partial testing (full testing has not yet been done) and the higher-compute version has not yet been tested. Chollet speculates in this blog post about how o3 works (I don’t think OpenAI has said much about this) and how that fits in to his overall thinking about LLMs and AGI:

(I know I'm late again replying to this thread.)

What surprises me about this whole situation is that people seem surprised that the executive leadership at a corporation worth an estimated $61.5B would engage in big-corporation PR-speak. The base rate for big-corporation execs engaging in such conduct in their official capacities seems awfully close to 100%.

Hm, good point. This gives me pause, but I'm not sure what direction to update in. Like, maybe I should update "corporate speak is just what these large orgs do and it's more like a fashion thing than a s... (read more)

When I speak of a strong inoculant, I mean something that is very effective in preventing the harm in question -- such as the measles vaccine. Unless there were a measles case at my son's daycare, or a family member were extremely vulnerable to measles, the protection provided by the strong inoculant is enough that I can carry on with life without thinking about measles. 

In contrast, the influenza vaccine is a weak inoculant -- I definitely get vaccinated because I'll get infected less and hospitalized less without it. But I'm not surprised when I get... (read more)

I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.

I never interpreted that to be the crux/problem here. (I know I'm late replying to this.) 

People can change what they identify as. For me, what looks shady in their responses are the clumsy attempts at downplaying their past association with EA.

I don't care about it because I still identify with EA; instead, I care because it goes under "not being consistently candid." (I quite like that ex... (read more)

I agree that these statements are not defensible. I'm sad to see it. There's maybe some hope that the person making these statements was just caught off guard and it's not a common pattern at Anthropic to obfuscate things with that sort of misdirection. (Edit: Or maybe the journalist was fishing for quotes and made it seem like they were being more evasive than they actually were.)

I don't get why they can't just admit that Anthropic's history is pretty intertwined with EA history. They could still distance themselves from "EA as the general public pe... (read more)

9
Greg_Colbourn ⏸️
Yes. It's sad to see, but Anthropic is going the same way as OpenAI, despite being founded by a group that split from OpenAI over safety concerns. Power (and money) corrupts. How long until another group splits from Anthropic and the process repeats? Or actually, one can hope that such a group splitting from Anthropic might actually have integrity and instead work on trying to stop the race.

As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways/for different reasons.

For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering many people don't have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where it matters what existing people want for the future of themselves and their loved ones.

Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.

Now, if you accept utilitarianism for a fixed population, you should think that D is better than C


If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we'd want to prevent changes from D to C.)

So, if either of the two worlds already exists, D>C.

Where the way you're setting up this argument turns controversial, though, is when you suggest that "D>C" is valid in some absolute sense, as opposed to just being valid (in virtue of how it better fulfills the preferences of existing peo... (read more)

When I said earlier that some people form non-hedonistic life goals, I didn't mean that they commit to the claim that there are things that everyone else should value. I meant that there are non-hedonistic things that the person in question values personally/subjectively.

You might say that subjective (dis)value is trumped by objective (dis)value -- then we'd get into the discussion of whether objective (dis)value is a meaningful concept. I argue against that in my above-linked post on hedonist axiology. Here's a shorter attempt at making some of the key po... (read more)

3
Nunik
I think I can see why anti-realism is not an "anything goes" approach, but I still can't see why "subjective" values (or meaning) should matter. Of course, I also used to look at value in terms of what I cared about, or what motivated me. But at some point I realized that what holding a belief about the importance of something boils down to is that I will feel various emotions and do various actions in response to situations that are related to the belief. There is no intrinsic (dis)value in me (dis)valuing something, I concluded, and this drove me to full-blown nihilism. But then I realized that (dis)value is something that is, not something that I can choose for myself based on some criteria. Suffering is what gives meaning to the word "bad". No possible belief about the experience of suffering could change its badness. Even when I was convinced that nothing mattered, my despair was producing genuine disvalue. So now I care about reducing suffering, but if I thought I was failing in achieving the goal of reducing suffering, this wouldn't by itself be bad. The world contains some amount of disvalue. My belief in the disvalue of suffering is an empirical claim about a feature of the world, and it motivates my actions and evokes emotions in me. (I haven't finished reading all the relevant texts you linked, but I am posting this comment for today.)

Depends what you mean by "moral realism." 

I consider myself a moral anti-realist, but I would flag that my anti-realism is not the same as saying "anything goes." Maybe the best way to describe my anti-realism to a person who thinks about morality in a realist way is something like this: 

"Okay, if you want to talk that way, we can say there is a moral reality, in a sense. But it's not a very far-reaching one, at least as far as the widely-compelling features of the reality are concerned. Aside from a small number of uncontroversial moral statemen... (read more)

1
Nunik
Thank you. My remaining question is: how do you make sense of the non-hedonistic life goals? When it comes to suffering, in the moment of experiencing it I am extremely confident about its disvalue because I think the experience provides real-time firsthand evidence of the disvalue. Whereas with other purported goods or bads it seems to me like the best that can be said in favor is something like "many reasonable people say so". But why do they say so? Because they have a feeling that something or other has value? See also this comment.

I agree that hedonically "neutral" experiences often seem perfectly fine. 

I suspect that there's a sleight of hand going on where moral realist proponents of hedonist axiology try to imply that "pleasure has intrinsic value" is the same claim as "pleasure is good." But the only sense in which "pleasure is good" is obviously uncontroversial is merely the sense of "pleasure is unobjectionable." Admittedly, pleasure also often is something we desire, or something we come to desire if we keep experiencing it -- but this clearly isn't always the case for a... (read more)

1
Nunik
Cannot moral realism be grounded at least in suffering, though? It seems inescapable to me that generating suffering in an experience machine would be disvaluable. For the experience to be suffering, it may require a component of wanting it to end, but this would still be a felt quality, right? So no matter when or where the suffering was experienced, no matter "who" experienced it, it would still be disvaluable due to its inherent nature.

I agree it is somewhat misleading, but I feel like using the internet is itself a highly useful skill in the modern world and insofar as the other models couldn't do it, that is too bad for them.

I haven't read your other recent comments on this, but here's a question on the topic of pausing AI progress. (The point I'm making is similar to what Brad West already commented.)

Let's say we grant your assumptions (that AIs will have values that matter the same as or more than human values and that an AI-filled future would be just as or more morally important than one with humans in control). Wouldn't it still make sense to pause AI progress at this important junction to make sure we study what we're doing so we can set up future AIs to do as well as (r... (read more)

6
Matthew_Barnett
In your comment, you raise a broad but important question about whether, even if we reject the idea that human survival must take absolute priority over other concerns, we might still want to pause AI development in order to “set up” future AIs more thoughtfully. You list a range of traits—things like pro-social instincts, better coordination infrastructures, or other design features that might improve cooperation—that, in principle, we could try to incorporate if we took more time. I understand and agree with the motivation behind this: you are asking whether there is a prudential reason, from a more inclusive moral standpoint, to pause in order to ensure that whichever civilization emerges—whether dominated by humans, AIs, or both at once—turns out as well as possible in ways that matter impartially, rather than focusing narrowly on preserving human dominance.  Having summarized your perspective, I want to clarify exactly where I differ from your view, and why. First, let me restate the perspective I defended in my previous post on delaying AI. In that post, I was critiquing what I see as the “standard case” for pausing AI, as I perceive it being made in many EA circles. This standard case for pausing AI often treats preventing human extinction as so paramount that any delay of AI progress, no matter how costly to currently living people, becomes justified if it incrementally lowers the probability of humans losing control.  Under this argument, the reason we want to pause is that time spent on “alignment research” can be used to ensure that future AIs share human goals, or at least do not threaten the human species. My critique had two components: first, I argued that pausing AI is very costly to people who currently exist, since it delays medical and technological breakthroughs that could be made by advanced AIs, thereby forcing a lot of people to die who could have otherwise been saved. Second, and more fundamentally, I argued that this "standard case" seems to r

Cool post!

From the structure of your writing (mostly the high number of subtitles), I often wasn't sure whether you're endorsing a specific approach versus just laying out what the options are and what people could do. (That's probably fine because I see the point of good philosophy as "clearly laying out the option space" anyway.)

In any case, I think you hit on the things I also find relevant. E.g., even as a self-identifying moral anti-realist, I place a great deal of importance on "aim for simplicity (if possible/sensible)" in practice. 

Some thoughts... (read more)

3
Noah Birnbaum
Thanks for the nice comment. Yea, I think this was more of "laying out the option space."  All very interesting points! 

Thanks for the reply, and sorry for the wall of text I'm posting now (no need to reply further, this is probably too much text for this sort of discussion)...

I agree that uncertainty is in someone's mind rather than out there in the world. Still, granting the accuracy of probability estimates feels no different from granting the accuracy of factual assumptions. Say I was interested in eliciting people's welfare tradeoffs between chicken sentience and cow sentience in the context of eating meat (how that translates into suffering caused per calorie of meat)... (read more)

3
Sjlver
OP here :) Thanks for the interesting discussion that the two of you have had! Lukas_Gloor, I think we agree on most points. Your example of estimating a low probability of medical emergency is great! And I reckon that you are communicating appropriately about it. You're probably telling your doctor something like "we came because we couldn't rule out complication X" and not "we came because X has a probability of 2%" ;-) You also seem to be well aware of the uncertainty. Your situation does not feel like one where you went to the ER 50 times, were sent home 49 times, and have from this developed a good calibration. It looks more like a situation where you know about danger signs which could be caused by emergencies, and have some rules like "if we see A and B and not C, we need to go to the ER".[1] Your situation and my post both involve low probabilities in high-stakes situations. That said, the goal of my post is to remind people that this type of probability is often uncertain, and that they should communicate this with the appropriate humility. ---------------------------------------- 1. That's how I would think about it, at least... it might well be that you're more rational than I, and use probabilities more explicitly. ↩︎

That makes sense; I understand that concern.

I wonder if, next time, the survey makers could write something to reassure us that they're not going to be using any results out of context or with an unwarranted spin (esp. in cases like the one here, where the question is related to a big 'divide' within EA, but worded as an abstract thought experiment).

If we're considering realistic scenarios instead of staying with the spirit of the thought experiment (which I think we should not, partly precisely because it introduces lots of possible ambiguities in how people interpret the question, and partly because this probably isn't what the surveyors intended, given the way EA culture has handled thought experiments thus far – see for instance the links in Lizka's answer, or the way EA draws heavily from analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard comp... (read more)

3
David T
Thanks for the thoughtful response. On (1) I'm not really sure the uncertainty and the trust in the estimate are separable. A probability estimate of a nonrecurring event[1] fundamentally is a label someone[2] applies to how confident they are something will happen. A corollary of this is that you should probably take into account how probability estimates could have actually been reached, your trust in that reasoning and the likelihood of bias when deciding how to act. [3] On (2) I agree with your comments about the OP's point; if the probabilities are +/-1 percentage point with error symmetrically distributed they're still on average 1.5%[4], though in some circumstances introducing error bars might affect how you handle risk. But as I've said, I don't think the distribution of errors looks like this when it comes to assessing whether long shots are worth pursuing or not (not even under the assumption of good faith). I'd be pretty worried if hits based grant-makers didn't, frankly, and this question puts me in their shoes.  Your point about analytic philosophy often expecting literal answers to slightly weird hypotheticals is a good one. But EA isn't just analytic philosophy and St Petersburg Paradoxes, it's also people literally coming up with best guesses of probabilities of things they think might work and multiplying them (and a whole subculture based on that, and guesstimating just how impactful "crazy train" long shot ideas they're curious about might be). So I think it's pretty reasonable to treat it not as a slightly daft hypothetical where a 1.5% probability is an empirical reality,[5] but as a real world decision grant award scenario where the "1.5% probability" is a suspiciously precise credence, and you've got to decide whether to trust it enough to fund it over something that definitely works. In that situation, I think I'm discounting the estimated chance of success of the long shot by more than 50%. FWIW I don't take the question as evidence the
4
Will Howard🔹
I'm one of the people who agreed with @titotal's comment, and it was because of something like this. It's not that I'm worried per se that the survey designers will write a takeaway that puts a spin on this question (last time they just reported it neutrally). It's more that I expect this question[1] to be taken by other orgs/people as a proxy metric for the EA community's support for hits-based interventions. And because of the practicalities of how information is acted on, the subtlety of the wording of the question might be lost in the process (e.g. in an organisation someone might raise the issue at some point, but it would eventually end up as a number in a spreadsheet or BOTEC, and there is no principled way to adjust for the issue that titotal describes). 1. ^ And one other about supporting low-probability/high-impact interventions

My intuitive reaction to this is "Way to screw up a survey." 

Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."

Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:

Why would this matter? If the estimate could be off by 1 percentage point, it could be... (read more)

3
Sjlver
I agree that our different reactions come partly from having different intuitions about the boundaries of a thought experiment. Which factors should one include vs exclude when evaluating answers? For me, I assumed that the question can't be just about expected values. This seemed too trivial. For simple questions like that, it would be clearer to ask the question directly (e.g., "Are you in favor of high-risk interventions with large expected rewards?") than to use a thought experiment. So I concluded that the thought experiment probably goes a bit further. If it goes further, there are many factors that might come into play: * How certain are we of the numbers? * Are there any negative effects if the intervention fails? These could be direct negative outcomes, but also indirect ones like difficulty to raise funds in the future, reputation loss... * Are we allocating a small part of a budget, or our total money? Is this a repeated decision or a one-off? I had no good answers, and no good guesses about the question's intent. Maybe this is clearer for you, given that you mention "the way EA culture has handled thought experiments thus far" in a comment below. I, for one, decided to skip the question :/
1
David T
Feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally-relevant base rate[1] that might have been chosen with a significant bias towards optimism is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2].  A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average rather than an estimate based on somewhat robust statistics (we inferred that 1.5% of people who receive this drug will be cured from the 1.5% of people who had that outcome in trials). So it seems quite reasonable to assume that the 1.5% chance of a positive binary outcome estimate might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact was as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity. 1. ^ either that or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion an option has exactly 1.5% chance of averting 100k DALYs myself  2. ^ if you want to show off you understand EV and risk estimation you'd answer (C) "here's how I'd construct my portfolio" anyway :-) 
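To make the arithmetic in the replies above concrete, here is a minimal sketch (mine, not from the original discussion) of the two scenarios being contrasted: the 1.5% and 100,000-DALY figures come from the thought experiment, the symmetric ±1 percentage point error is the case granted above, and the "optimistic upper bound" model (true probability uniform between 0 and the stated value) is purely an illustrative assumption.

```python
import random

# Hypothetical numbers from the survey thought experiment discussed above.
DALYS_IF_SUCCESS = 100_000
STATED_P = 0.015  # the stated 1.5% chance of success

def expected_dalys(p):
    """Expected DALYs averted for a given probability of success."""
    return p * DALYS_IF_SUCCESS

N = 100_000

# Case 1: the stated probability is noisy but unbiased
# (symmetric +/- 1 percentage point error).
symmetric = [STATED_P + random.uniform(-0.01, 0.01) for _ in range(N)]
ev_symmetric = sum(expected_dalys(p) for p in symmetric) / N

# Case 2 (illustrative assumption): the stated probability is an optimistic
# upper bound, and the "true" probability lies uniformly between 0 and it.
biased = [random.uniform(0, STATED_P) for _ in range(N)]
ev_biased = sum(expected_dalys(p) for p in biased) / N

print(f"EV with symmetric error: ~{ev_symmetric:,.0f} DALYs (stays near 1,500)")
print(f"EV with optimistic bias: ~{ev_biased:,.0f} DALYs (roughly half)")
```

The first case just reflects linearity of expectation; the second is one way of modelling the worry that a suspiciously precise long-shot estimate was reached optimistically.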

Probably most people are "committed to safety" in the sense that they wouldn't actively approve conduct at their organization where executives got developers to do things that they themselves presented as reckless. To paint an exaggerated picture, imagine if some executive said the following:

"I might be killing millions of people here if something goes wrong, and I'm not super sure if this will work as intended because the developers flagged significant uncertainties and admitted that they're just trying things out essentially flying blind; still, we won't... (read more)

This seems cool! 

I could imagine that many people will gravitate towards moral parliament approaches even when all the moral considerations are known. If moral anti-realism is true, there may not come a point in moral reflection under idealized circumstances where it suddenly feels like "ah, now the answer is obvious." So, we can also think of moral parliament approaches as a possible answer to undecidedness when all the considerations are laid open. 

I feel like only seeing it as an approach to moral uncertainty (so that, if we knew more about moral considerations, we'd just pick one of the first-order normative theories) is underselling the potential scope of applications of this approach. 

5
arvomm
Thank you for your comment Lukas, we agree that this tool, and more generally this approach, could be useful even in that case, when all considerations are known. The ideas we built on and the language we used came from the literature on moral parliaments as an approach to better understand and tackle moral uncertainty, hence us borrowing from that framing.

Secondly, prioritizing competence. Ultimately, humanity is mostly in the same boat: we're the incumbents who face displacement by AGI. Right now, many people are making predictable mistakes because they don't yet take AGI very seriously. We should expect this effect to decrease over time, as AGI capabilities and risks become less speculative. This consideration makes it less important that decision-makers are currently concerned about AI risk, and more important that they're broadly competent, and capable of responding sensibly to confusing and stressful s

... (read more)

you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence

This seems true, but I'd also say that the people who do take AI risk seriously also typically lack different important forms of competence. I don't think this is coincidental; instead I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex fast-moving environments". The former typically requires a sort of thinking-first orientation to the world, the latter an action-f... (read more)

Okay, what does not tolerating actual racism look like to you? What is the specific thing you're asking for here?

Up until recently, whenever someone criticized rationality or EA for being racist or for supporting racists, I could say something like the following: 

"I don't actually know of anyone in these communities who is racist or supports racism. From what I hear, some people in the rationality community occasionally discuss group differences in intelligence, because this was discussed in writings by Scott Alexander, which a lot of people have read... (read more)

I’m not that surprised we aren’t understanding one another, we have our own context and hang ups.

Yeah, I agree I probably didn't get a good sense of where you were coming from. It's interesting because, before you made the comments in this post and in the discussion here underneath, I thought you and I probably had pretty similar views. (And I still suspect that – seems like we may have talked past each other!) You said elsewhere that last year you spoke against having Hanania as a speaker. This suggested to me that even though you value truth-seeking a lo... (read more)

2
Nathan Young
On our interactions: I imagine we do, though here I am sort of specifically trying to find a crux. Probably I'm being a bit grumpy about it, but all in all, I think I agree with you a lot.  I agree. I'm pretty moderately of generativeness without any kinds of incentives towards kindness.  I am less sure that shaming events with unkindness is the kind of incentive we want.  I can't speak for anyone else, but I am pretty willing to make trades here. And I have done so. Though I want to know what the trades are beforehand.  I don't really consider the "concessions" so far to be trades as such, I just think that a norm against racism is really valuable and we should allow people to break it only at much greater cost than we've seen. Though I take up that issue with manifest internally rather than on here.  I am trying to figure out the underlying disagreement, which I think causes me to cut in a different direction than I normally would. Fair play. 

I feel like the controversy over the conference has become a catalyst for tensions in the involved communities at large (EA and rationality).

It has been surprisingly common for me to make what I perceive to be a totally sensible point that isn't even particularly demanding (about, e.g., maybe not tolerating actual racism) and then the "pro truth-seeking faction" seem to lump me together with social justice warriors and present analogies that make no sense whatsoever. It's obviously not the case that if you want to take a principled stance against racism, you... (read more)

4
Nathan Young
Edited to be about 1 thing. Well we agree that it doesn’t feel great to feel misunderstood. Okay, what does not tolerating actual racism look like to you? What is the specific thing you're asking for here? 

I don't really think it's this. I think it is "I don't want people associating me with people or ideas like that so I'd like you to stop please".

It might be what you say for some people, but that doesn't ring true for my case (at all). (But also, compared to all the people who complained about stuff at Manifest or voiced negative opinions from the sidelines as forum users, I'm pretty sure I'm in the 33% that felt the least strongly and had fewer items to pick at.)

But let's take your case, that means you think that on the margin some notion of consideraten

... (read more)
4
Bob Jacobs
Another way to frame it is through the concept of collective intelligence. What is good for developing individual intelligence may not be good for developing collective intelligence. Think, for example, of schools that pit students against each other and place a heavy emphasis on high-stakes testing to measure individual student performance. This certainly motivates people to personally develop their intellectual skills; just look at how much time, e.g. Chinese children are spending on school. But is this better for the collective intelligence? High-stakes testing often leads to a curriculum that is narrowly focused on intelligence-focused skills that are easily measurable by tests. This can limit the development of broader, harder-to-measure social skills that are vital for collective intelligence, such as communication, group brainstorming, deescalation, keeping your ego in check, empathy... And such a testing-focused environment can discourage collaborative learning experiences because the focus is on individual performance. This reduction in group learning opportunities and collaboration limits overall knowledge growth. It can exacerbate educational inequalities by disproportionately disadvantaging students from lower socio-economic backgrounds, who may have less access to test preparation resources or supportive learning environments. This can lead to a segmented education system where collective intelligence is stifled because not all members have equal opportunities to contribute and develop. And what about all the work that needs to be done that is not associated with high intelligence? Students who might not excel in what a given culture considers high-intelligence (such as the arts, practical skills, or caretaking work) may feel undervalued and disengage from contributing their unique perspectives. Worse, if they continue to pursue individual intelligence, you might end up with a workforce that has a bad division of labor, despite having people that t

"Influence-seeking" doesn't quite resonate with me as a description of the virtue on the other end of "truth-seeking."

What's central in my mind when I speak out against putting "truth-seeking" above everything else is mostly a sentiment of "I really like considerate people and I think you're driving out many people who are considerate, and a community full of disagreeable people is incredibly off-putting."

Also, I think the considerateness axis is not the same as the decoupling axis. I think one can be very considerate and also great at decoupling; you just have to be able to couple things back together as well.

1
Nathan Young
Let's try this again.  Offputting to whom? The vast majority of people arguing here are people who would never attend manifest. I'm not super worried if they are put off.  I imagine the view that many people have of the event is not how it was at all. 
1
Nathan Young
I don't really think it's this. I think it is "I don't want people associating me with people or ideas like that so I'd like you to stop please".  But let's take your case, that means you think that on the margin some notion of considerateness/kindness/agreeableness is more important than truth-seeking. Is that right?  And if so, why should EA be pushing for that at the margin. I get why people would push for influence over truth and I get why considerateness is valuable. But on the margin I would pick more truth. It feels like in the past, more considerateness might have led to less hard discussions about AI or even animal welfare. Seems those discussions have generally led us to positions we agree with in hindsight.
1
Nathan Young
What do you think a community ought to value more than truth-seeking? What might you call the value you think trades off?

Good points! It seems good to take a break or at least move to the meta level.

I think one emotion that is probably quite common in discussions about what norms should be (at least in my own experience) is clinging. Quoting from Joe Carlsmith's post on it:

Clinging, as I think about it, is a certain mental flavor or cluster of related flavors. It feels contracted, tight, clenched, and narrow. It has a kind of hardness, a “not OK-ness,” and a (sometimes subtle) kind of desperation. It sees scarcity. It grabs. It sees threat. It pushes away. It carries seeds o

... (read more)

Well said.

I meant to say the exact same thing, but seem to have struggled at communicating.

I want to point out that my comment above was specifically reacting to the following line and phrasing in timunderwood's parent comment:

I also have a dislike for excluding people who have racist style views simply on that basis, with no further discussion needed, because it effectively is setting the prior for racism being true to 0 before we've actually looked at the data.

My point (and yours) is that this quoted passage would be clearer if it said "genetic group differences" instead of "racism."

I agree with this diagnosis of the situation. At the same time, I feel like it's the wrong approach to make it a scientific proposition whether racism is right or not. It should never be right, no matter the science. (I know this is just talking semantics, but I think it adds a bunch of moral clarity to frame it in this way, that science can never turn out to support racism.) As I said here, the problem I see with the HBD crowd is that they think their opinions on the science justify certain other things or that it's a very important topic.

The scientific proposition is "are there racial genetic differences related to intelligence", right, not "is racism [morally] right"?

I find it odd how much such things seem to be conflated; if I learned that Jews have an IQ an average of 5 points lower than non-Jews, I would... still think the Holocaust and violence towards and harassment of Jews was abhorrent and horrible? I don't think I'd update much/at all towards thinking it was less horrible. Or if you could visually identify people whose mothers had drank alcohol during pregnancy, and they were... (read more)

I agree the article was pretty bad and unfair, and I agree with most things you say about cancel culture.

But then you lose me when you imply that racism is no different than taking one of the inevitable counterintuitive conclusions in philosophy thought experiments. (I've previously had a lengthy discussion on this topic in this recent comment thread.)

If I were an organizer of a conference where I wanted interesting and relevant ideas to be discussed, I'd still want there to be a bar for attendees to avoid the problem Scott Alexander pointed out (so... (read more)

I think generally though it's easy to misunderstand people, and if people respond to clarify, you should believe what they say they meant to say, not your interpretation of what they said.

Depends on context. Not (e.g.) if someone has a pattern of using plausible deniability to get away with things (I actually don't know if this applies to Hanania) or if we have strong priors for suspecting that this is what they're doing (arguably applies here for reasons related to his history; see next paragraph).

If someone has a history of being racist, but they say the... (read more)

I made the following edit to my comment above-thread:

[Edit: To be clear, by "HBD crowd" I don't mean people who believe and say things like "intelligence is heritable" or "embryo selection towards smarter babies seems potentially very good if implemented well." I thought this was obvious, but someone pointed out that people might file different claims under the umbrella "HBD".]

I'm not sure this changes anything about your response, but my perspective is that a policy of "let's not get obsessed over mapping out all possible group differences and whether the... (read more)
