That makes sense; I understand that concern.
I wonder if, next time, the survey makers could write something to reassure us that they're not going to use any results out of context or with an unwarranted spin (especially in cases like this one, where the question relates to a big 'divide' within EA but is worded as an abstract thought experiment).
If we're considering realistic scenarios instead of staying with the spirit of the thought experiment (which I think we shouldn't, partly because doing so introduces lots of ambiguity in how people interpret the question, and partly because it probably isn't what the surveyors intended, given how EA culture has handled thought experiments so far – see, for instance, the links in Lizka's answer, or the way EA draws heavily on analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard part of the toolkit), then I agree that an advertised 1.5% chance of having a huge impact could be more likely biased upwards than downwards. (But it depends on who's doing the estimate – some people are actually well-calibrated, or prone to be extra modest.)
[...] is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills).
(1) What you described seems to me best characterized as being about trust – trust in others' risk estimates. That would be separate from attitudes towards uncertainty (and if that's what the surveyors wanted to elicit, they'd probably have asked the question very differently).
(Or maybe what you're thinking about could be someone having radical doubts about the entire epistemology behind "low probabilities"? I'm picturing a position that goes something like, "it's philosophically impossible to reason sanely about low probabilities; besides, when we make mistakes, we'll almost always overestimate rather than underestimate our ability to have effects on the world." Maybe that's what you think people are thinking – but as an absolute, this would seem weirdly detailed and radical to me, and I feel like there's a prudential wager against believing that our reasoning is doomed from the start in a way that would prohibit everyone from pursuing ambitious plans.)
(2) What I meant wasn't about basic EV calculation skills (obviously) – I didn't mean to suggest that just because the EV of the low-probability intervention is greater than the EV of the certain intervention, it's a no-brainer that it should be taken. I was just saying that the OP's point about probabilities maybe being off by one percentage point, by itself, without some allegation of systematic bias in the measurement, doesn't change the nature of the question. There's still the further question of whether we want to bring in other considerations besides EV. (I think "attitudes towards uncertainty" fits well here as a title, but again, I would reserve it for the thing I'm describing, which is clearly different from "do you think other people/orgs within EA are going to be optimistically biased?")
(Note that it's one question whether people would go by EV in cases that stay well within the bounds of the number of people who currently exist on Earth. I think it becomes a separate question when you go further towards extremes, like whether people would continue gambling in the St Petersburg paradox, or how they relate to claims about realms vastly larger than anything in our current understanding of physics, the way Pascal's mugging postulates.)
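For reference (this is just the standard formulation of the game, not anything from the survey): the St Petersburg bet pays $2^k$ if the first heads comes up on flip $k$, which happens with probability $2^{-k}$, so the expected payout diverges even though large payouts are vanishingly unlikely:

$$\mathbb{E}[\text{payout}] = \sum_{k=1}^{\infty} \frac{1}{2^k}\cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$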
Finally, I realize that maybe the other people here in the thread have so little trust in the survey designers that they're worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like "more EAs are in favor of donating to speculative AI risk interventions." I agree that, if you think the survey designers will update too strongly on your answers to a thought experiment, you should point out all the ways in which you're not automatically endorsing their preferred option. But I feel like the EA survey already has lots of practical questions along the lines of "Where do you actually donate to?" So it feels unlikely that this question is trying to trick respondents, or that the survey designers will just generally draw takeaways from this that aren't warranted?
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by one percentage point, it could be down at 0.5% or up at 2.5%, which is still 1.5% in expectation. Also, if this question were meant to be about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondents trust an estimate from some example EA org.
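Spelled out, under the (unstated) assumption that the one-percentage-point error is equally likely to go in either direction:

$$\tfrac{1}{2}\times 0.5\% \;+\; \tfrac{1}{2}\times 2.5\% \;=\; 1.5\%$$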
We seem to disagree on the use of thought experiments. The OP writes:
When designing thought experiments, keep them as realistic as possible, so that they elicit better answers. This reduces misunderstandings, pitfalls, and potentially compounding errors. It produces better communication overall.
I don't think this is necessary, and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it's asking about a real-world situation where they're invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that they had no effect or wasted their money). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.
Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).
*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classic "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that you could save the lives of four people needing urgent organ transplants," it makes little sense to just go "all else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character."
So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what can we draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point, probably in a better way.)
Probably most people are "committed to safety" in the sense that they wouldn't actively approve conduct at their organization where executives got developers to do things that they themselves presented as reckless. To paint an exaggerated picture, imagine if some executive said the following:
"I might be killing millions of people here if something goes wrong, and I'm not super sure if this will work as intended because the developers flagged significant uncertainties and admitted that they're just trying things out essentially flying blind; still, we won't get anywhere if we don't take risks, so I'm going to give everyone the go-ahead here!"
In that scenario, I think most people would probably object quite strongly.
But as you already suggest, the more relevant question is one of willingness to go the extra mile, and of qualifications: Will the board members care about and be able to gain an informed understanding of the risks of certain training runs, product releases, or model security measures? Alternatively, will they care about (and be good at) figuring out whose opinions and judgment they can trust and defer to on these matters?
With the exodus of many of the people who were concerned about mitigating risks of AI going badly wrong, the remaining culture there will likely be more focused on upsides, on moving fast, on beating competitors, etc. There will likely be fewer alarming disagreements among high-ranking members of the organization (because the ones who had alarming disagreements already left). The new narrative on safety will likely be something like, "We have to address the fact that there was this ideological club of doomer folks who used to work at OpenAI. I think they were well-intentioned (yada yada), but they were wrong because of their ideological biases, and it's quite tragic because the technology isn't actually that risky the way we're currently building it." (This is just my guess; I don't have any direct info on what the culture is now like, so I might be wrong.)
So, my guess is that the rationales the leadership will present to board members on any given issue will often seem very reasonable ASSUMING that you go into this with the prior of "I trust that leadership has good judgment here." The challenging task for board members will be that they might have to go beyond just looking at things the way they get presented to them and ask questions that the leadership wouldn't even put on the meeting agenda. For instance, they could ask for costly signals of things the organization could do to create a healthy culture for assessing risks and for discussing acceptable risk tradeoffs. (Past events suggest that this sort of culture is less likely to exist than it was before the exodus of people who did safety-themed work.)
To summarize, my hope is for board members to take seriously the following three possibilities: (1) That there might be big risks in the AGI tech tree. (2) That org leadership might not believe in these risks or might downplay them because it's convenient for them that way. (3) That org-internal discussions on risks from AI might appear one-sided because of "evaporative cooling" (most of the people who were particularly concerned having already left, for reasons unrelated to their judgment/forecasting abilities).
This seems cool!
I could imagine that many people will gravitate towards moral parliament approaches even when all the moral considerations are known. If moral anti-realism is true, there may never come a point in moral reflection under idealized circumstances where it suddenly feels like "ah, now the answer is obvious." So, we can also think of moral parliament approaches as a possible answer to remaining undecidedness even when all the considerations are laid out.
I feel like only seeing it as an approach to moral uncertainty (so that, if we knew more about moral considerations, we'd just pick one of the first-order normative theories) is underselling the potential scope of applications of this approach.
Secondly, prioritizing competence. Ultimately, humanity is mostly in the same boat: we're the incumbents who face displacement by AGI. Right now, many people are making predictable mistakes because they don't yet take AGI very seriously. We should expect this effect to decrease over time, as AGI capabilities and risks become less speculative. This consideration makes it less important that decision-makers are currently concerned about AI risk, and more important that they're broadly competent, and capable of responding sensibly to confusing and stressful situations, which will become increasingly common as the AI revolution speeds up.
I think this is a good point.
At the same time, I think you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence. This inference is only probabilistic, but it's IMO already pretty strong (a lot stronger now than it was four years ago), and it'll get stronger still.
It also depends on how much a specific person has been interacting with the technology; meaning, it probably applies a lot less to DC policy people, but more to ML scientists or people at AI labs.
Okay, what does not tolerating actual racism look like to you? What is the specific thing you're asking for here?
Up until recently, whenever someone criticized rationality or EA for being racist or for supporting racists, I could say something like the following:
"I don't actually know of anyone in these communities who is racist or supports racism. From what I hear, some people in the rationality community occasionally discuss group differences in intelligence, because this was discussed in writings by Scott Alexander, which a lot of people have read and so it gives them shared context. But I think this doesn't come from a bad place. I'm pretty sure people who are central to these communities (EA and rationality) would pretty much without exception speak up strongly against actual racists."
It would be nice if I could still say something like that, but it no longer seems like I can, because a surprising number of people have said things like "person x is quite racist, but [...] interesting ideas."
I’m not that surprised we aren’t understanding one another, we have our own context and hang ups.
Yeah, I agree I probably didn't get a good sense of where you were coming from. It's interesting because, before you made the comments in this post and in the discussion underneath, I thought you and I probably had pretty similar views. (And I still suspect that – it seems like we may have talked past each other!) You said elsewhere that last year you spoke against having Hanania as a speaker. This suggested to me that even though you value truth-seeking a lot, you also seem to think there should be some other kinds of standards. I don't think my position is that different from "truth-seeking matters a ton, but there should be some other kinds of standards." That's probably the primary reason I spent a bunch of time commenting on these topics: the impression that the "pro truth-seeking" faction seemed, in my view, to be failing to make even some pretty small/cheap concessions. (And it seemed like you were one of the few people who did make such concessions, so I don't know why/if it feels like we're disagreeing a lot.)
(This is unrelated, but it's probably good for me to separate a timeless discussion about norms from the empirical question of "How likely is it that Hanania has changed a lot compared to his former self?" I do have pessimistic-leaning intuitions about the latter, but they're not very robust because I really haven't looked into this topic much, and maybe I'm just prejudiced. I understand that, if someone is more informed than me and confidently believes that Hanania's current views and personality are morally unobjectionable, it obviously wouldn't be a "small concession" for them to disinvite or not platform someone they consider totally unobjectionable! I think that can be a defensible view, depending on whether they have good reasons to be confident in these things. At the same time, the reason I thought there were small/cheap concessions that people could have made but weirdly enough didn't, was that a bunch of people explicitly said things like "yeah, he's pretty racist" or "yeah, he recently said things that are pretty racist" and then still proceeded to talk as though this is just normal and as though excluding racists would be like excluding Peter Singer. That's where they really lost me.)
Just as a heads-up, I'm planning to get off the EA forum for a while to avoid the time-sink issues, so I may not leave more comments here anytime soon.
I feel like the controversy over the conference has become a catalyst for tensions in the involved communities at large (EA and rationality).
It has been surprisingly common for me to make what I perceive to be a totally sensible point that isn't even particularly demanding (about, e.g., maybe not tolerating actual racism), only for the "pro truth-seeking faction" to lump me together with social justice warriors and present analogies that make no sense whatsoever. It's obviously not the case that if you want to take a principled stance against racism, you're logically compelled to have also objected to things that were important to EA (like work by Singer, the Bostrom/Savulescu human enhancement stuff, AI risk, animal risk [I really didn't understand why the latter two were mentioned], etc.). One of these things is not like the others. Racism goes against universal compassion and equal consideration of interests (and it typically involves hateful sentiments). By contrast, none of the other topics are like that.
To summarize, it seems concerning if the truth-seeking faction is unable to understand the difference between, say, my comments and how a social justice warrior would react to this controversy. (This isn't to say that none of the people who criticized aspects of Manifest were motivated by further-reaching social justice concerns; I readily admit that I've seen many comments that, in my view, go too far in the direction of cancelling/censorship/outrage.)
Ironically, I think this is very much an epistemic problem. I feel like a few people have acted a bit dumb in the discussions I've had here recently, at least if we consider it "dumb" when someone repeatedly fails at passing Ideological Turing Tests or displays black-and-white thinking about a topic. I get the impression that the rationality community has suffered quite a lot defending itself against cancel culture, to the point that they're now a bit (lowercase-t) traumatized. This is understandable, but it doesn't change the fact that it's a suboptimal state of affairs.
Offputting to whom?
If it bothers me, I can assume that some others will react similarly.
You don't have to be a member of the specific group in question to find it uncomfortable when people in your environment say things that rile up negative sentiments against that group. For instance, twelve-year-old children are unlikely to attend EA or rationality events, but if someone there talked about how they think twelve-year-olds aren't really people and their suffering matters less, I'd be pissed off too.
All of that said, I'm overall grateful for LW's existence; I think habryka did an amazing job reviving the site, and I do think LW has overall better epistemic norms than the EA forum (even though most of the people I intellectually admire the most are, if I had to pick only one label, more EAs than rationalists – though they're often people who seem to fit into both communities).
Thanks for the reply, and sorry for the wall of text I'm posting now (no need to reply further, this is probably too much text for this sort of discussion)...
I agree that uncertainty is in someone's mind rather than out there in the world. Still, granting the accuracy of probability estimates feels no different from granting the accuracy of factual assumptions. Say I was interested in eliciting people's welfare tradeoffs between chicken sentience and cow sentience in the context of eating meat (how that translates into suffering caused per calorie of meat). Even if we lived in a world where false labelling of meat was super common (such that, say, when you buy things labelled as 'cow', you might half the time get tuna, and when you buy chicken, you might half the time get ostrich), if I'm asking specifically for people's estimates of the moral disvalue from chicken calories vs cow calories, it would be strange if survey respondents factored in information about tunas and ostriches. Surely, if I was also interested in how people thought about calories from tunas and ostriches, I'd be asking about those animals too!
Also, circumstances about the labelling of meat products can change over time, so that previously elicited estimates on "chicken/cow-labelled things" would now be off. Survey results will be more timeless if we don't contaminate straightforward thought experiments with confounding empirical considerations that weren't part of the question.
A respondent might mention Kant and how all our knowledge about the world is indirect, how there's trust involved in taking assumptions for granted. That's accurate, but let's just take them for granted anyway and move on?
On whether "1.5%" is too precise of an estimate for contexts where we don't have extensive data: If we grant that thought experiments can be arbitrarily outlandish, then it doesn't really matter.
Still, I could imagine that you'd change your mind about never using these estimates if you thought more about situations where they might become relevant. For instance, I used estimates in that area (roughly around 1.5% chance of something happening) several times within the last two years:
My wife developed lupus a few years ago, which is the illness that often makes it onto the whiteboard in the show Dr. House because it can throw up symptoms that mimic tons of other diseases, sometimes serious ones. We had a bunch of health scares where we were thinking, "this is most likely just some weird lupus-related symptom that isn't actually dangerous, but it also resembles that other thing (which is also a common secondary complication from lupus or its medications), which would be a true emergency." In these situations, should we go to the ER for a check-up or not? With a 4-5h average A&E waiting time and the chance of catching viral illnesses while there (which are extra bad when you already have lupus), it probably doesn't make sense to go in if we think the chance of a true emergency is only <0.5%. However, at 2% or higher, we'd for sure want to go in. (In between those two, we'd probably continue to feel stressed and undecided, and maybe go in primarily for peace of mind, lol.)

Narrowing things down from "most likely it's nothing, but there's some small chance that it's bad!" to either "I'm confident this is <0.5%" or "I'm confident this is at least 2%" is not easy, but it worked in some instances. This suggests there's some usefulness (as a matter of the practical necessity of making medical decisions in a context of long A&E waiting times) to making decisions based on a fairly narrowed-down low-probability estimate. Sure, the process I described is still a bit fuzzier than just pulling a 1.5% point estimate from somewhere, but I feel like it approaches the level of precision needed to narrow things down that much, and I think many other people would have similar decision thresholds in a situation like ours.
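As a minimal sketch of the underlying logic (with made-up cost numbers, not anything we actually wrote down), the "stay home below ~0.5%, go in above ~2%" pattern falls out of comparing the expected cost of staying home against the fixed cost of going in:

```python
# Toy decision-threshold sketch with hypothetical, illustrative costs.
COST_OF_GOING = 1.0      # hypothetical: hours of A&E waiting, infection exposure, stress
COST_OF_MISSING = 100.0  # hypothetical: harm from sitting out a true emergency

def should_go(p_emergency: float) -> bool:
    """Go in when the expected cost of staying home exceeds the cost of going."""
    return p_emergency * COST_OF_MISSING > COST_OF_GOING

# The indifference point sits at COST_OF_GOING / COST_OF_MISSING = 1%,
# i.e. between the 0.5% and 2% anchors mentioned above.
print(should_go(0.005))  # False: below the threshold, stay home
print(should_go(0.02))   # True: above the threshold, go in
```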
Admittedly, medical contexts are better studied than charity contexts, and especially influencing-the-distant-future charity contexts. So, it makes sense if you're especially skeptical of that level of precision in charitable contexts. (And I indeed agree with this; I'm not defending that level of precision in practice for EA charities!) Still, like habryka pointed out in another comment, I don't think there's a red line where fundamental changes happen as probabilities get lower and lower. The world isn't inherently frequentist, but we can often find plausibly-relevant base rates. Admittedly, there's always some subjectivity, some art, in choosing relevant base rates, assessing additional risk factors, and making judgment calls about "how much is this symptom a match?" But if you find the right context for it (meaning: a context where you're justifiably anchoring to some very low-probability base rate), you can get well below the 0.5% level for practically-relevant decisions (and maybe make proportional upwards or downwards adjustments from there). For these reasons, it doesn't strike me as totally outlandish that some group will at some point come up with a ranged very-low-probability estimate of averting some risk (like asteroid risk or whatever) while being well-calibrated. I'm not saying I have a concrete example in mind, but I wouldn't rule it out.