Hello! My name is Vaden Masrani and I'm a grad student in machine learning at UBC. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a small reddit discussion forming that might be of interest to some.
Cheers!
If someone whom I have trusted to work out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work which might be much harder for me to see and verify.
Related: Gell-Mann Amnesia
(Edit: Also related, Epistemic Learned Helplessness)
The correct default response to this effect, in my view, mostly does not look like 'ignoring the bad arguments and paying attention to the best ones'. That's almost exactly the approach the above quote describes and (imo correctly) mocks: ignoring the show business article because your expertise lets you see the arguments are bad, and taking the Palestine article seriously because the arguments appear to be good.
I think the correct default response is something closer to 'focus on your areas of expertise, and see how the proponents conduct themselves within that area. Then use that as your starting point for guessing at their accuracy in areas you know less well'.
I appreciate that stuff like the above is part of why you wrote this. I still wanted to register that I think this framing is backwards: I don't think you should evaluate the strength of arguments across all domains as they come and then adjust for the trustworthiness of the person making them. In general, I think it's much better (measured by believing more true things) to assess the trustworthiness of the person in some domain you understand well, and only then adjust, to a limited extent, based on the apparent strength of the arguments made in other domains.
It's plausible that this boils down to a question of 'how good are humans at assessing the strength of arguments in areas they know little about'. In the ideal case, we'd be perfect at it. In reality, I think I am pretty terrible at it, in pretty much exactly the way the Gell-Mann quote describes, and so want to put minimal weight on those feelings of strength; they just don't have enough predictive power to justify moving my priors all that much. YMMV.
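As a toy illustration of that last point (my own numbers, not anything from the discussion above): in Bayesian terms, a 'this argument feels strong' signal only justifies a large update if that feeling is much more likely to occur when the conclusion is true than when it's false. A minimal sketch:

```python
# Toy Bayesian update: how much should "this argument feels strong to me"
# move my credence that its conclusion is true? All numbers are illustrative.

def update(prior, p_signal_given_true, p_signal_given_false):
    """Posterior P(true | signal) via Bayes' rule."""
    numerator = p_signal_given_true * prior
    denominator = numerator + p_signal_given_false * (1 - prior)
    return numerator / denominator

prior = 0.5  # credence in the conclusion before hearing the argument

# In a domain I know well, "feels strong" is far more likely when the
# conclusion is actually true: an informative signal.
print(update(prior, 0.9, 0.2))   # ~0.82 -- a big update is warranted

# In a domain I know poorly (the Gell-Mann case), "feels strong" is
# almost as likely either way: a nearly uninformative signal.
print(update(prior, 0.55, 0.45)) # ~0.55 -- the prior barely moves
```

With a near-uninformative signal the posterior barely differs from the prior, which is the formal version of 'feelings of argument strength in unfamiliar domains shouldn't move my priors much'.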