Econ PhD student at Oxford and research associate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.
The probability of success in some project may be correlated with its value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground belongs to a box of treasure or to a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success | treasure) * (value of treasure). All the probabilities are “all-things-considered”.
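For concreteness, here is that calculation in code, with made-up numbers; every input below is purely illustrative.

```python
# A minimal sketch of the excavation example; all numbers are invented.
p_treasure = 0.1                # P(it's a treasure box rather than a boulder)
p_success_given_treasure = 0.8  # P(excavation succeeds | it's treasure)
treasure_value = 1000           # payoff if we successfully dig up treasure

# EV(try to excavate) = P(treasure) * P(success | treasure) * value of treasure.
# The correlation between success and value is handled simply by using the
# conditional probability P(success | treasure); nothing unusual is needed.
expected_value = p_treasure * p_success_given_treasure * treasure_value
print(round(expected_value, 2))  # 80.0
```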
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as a demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
I'm a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go… Then I should update towards deferring to EAs.
I'm defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The way in which the policy “give EAs more weight all around, because they've got a good track record on things they've been outside the mainstream on” is criticizable on epistemic modesty grounds is that one could object, “Others can see the track record as well as you. Why do you think the right amount to update on it is larger than they think it is?” You can salvage a thought along these lines in an epistemic-modesty-criticism-proof way, but it would need some further story about how, say, you have some “inside information” about the fact of EAs' better track record. Does that help?
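To illustrate the contrast with a toy sketch (the weighting rule and all numbers here are my own invention, not anything from the post):

```python
# Toy illustration only: hypothetical credences and track-record scores.
ea_credence, mainstream_credence = 0.7, 0.4  # credences on some contested claim
ea_track, mainstream_track = 0.8, 0.6        # hypothetical past-accuracy scores

# Maximally modest baseline: give everyone equal weight.
equal_weight_view = 0.5 * ea_credence + 0.5 * mainstream_credence

# Track-record policy: weight each group in proportion to its past accuracy.
w_ea = ea_track / (ea_track + mainstream_track)
track_record_view = w_ea * ea_credence + (1 - w_ea) * mainstream_credence

print(round(equal_weight_view, 3))  # 0.55
print(round(track_record_view, 3))  # 0.571
```

The modesty objection above is then that anyone can run this same computation, so the policy only escapes the objection if you have some private reason to trust the track-record inputs more than others do.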
Your quote is replying to my attempt at a "gist" in the introduction. I try to spell this out a bit further in the middle of the last section, in the bit where I say "More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group’s ability on this front is difficult to determine without years of close contact." Let me know if that bit clarifies the point.
Re:

"I currently don't think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size..."
I don't follow. I get that acting on low-probability scenarios can let you get in on neglected opportunities, but you don't want to actually get the probabilities wrong, right?
In any event, maybe messing up the epistemics also makes it easier for you to spot neglected opportunities or something, and maybe this benefit sometimes kind of cancels out the cost, but this doesn't strike me as relevant to the question of whether epistemic deference as a concept makes sense. Startup founders may benefit from overconfidence, but overconfidence as a concept still makes sense.
Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?
No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!
Hey, I think this sort of work can be really valuable. Thanks for doing it, and thanks, Tristan, for reaching out about it the other day!
I wrote up a few pages of comments here (initially just for Tristan but he said he'd be fine with me posting it here). Some of them are about nitpicky typos that probably won't be of interest to anyone but the authors, but I think some will be of general interest.
Despite its length, even this batch of comments just consists of what stood out on a quick skim; there are whole sections (especially of the appendix) that I've barely read. But in short, for whatever it's worth:
By the way, someone wrote this Google doc in 2019 on "Stock Market prediction of transformative technology". I haven't taken a look at it in years, and neither has the author, so understandably enough, they're asking to remain nameless to avoid possible embarrassment. But hopefully it's at least somewhat relevant, in case anyone's interested.
It should, thanks! Fixed.