Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum hoping that William MacAskill and Hilary Greaves might see it and respond. There's also a small reddit discussion forming that might be of interest to some.
Cheers!
I think the probability of these events regardless of our influence is not what matters; what matters is our causal effect on them. Longtermism rests on the claim that we can predictably affect the long-term future for the better. You say that it would be overconfident to assign probabilities that are too low in certain cases, but that argument applies equally to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than our ability to align it, by creating a false sense of security and complacency, or because the future turns out to be bad, so that preventing extinction itself backfires. Any intervention can backfire. Most will accomplish little. With longtermist interventions we may never know which, since the feedback we get is far too weak to tell us.
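To make the backfire worry concrete, here is a minimal sketch of the expected-value arithmetic. Every number in it is a hypothetical placeholder I've chosen purely for illustration, not an estimate anyone has defended; the point is only that when the stakes are astronomical, tiny and essentially unknowable shifts in the backfire probability flip the sign of the calculation.

```python
# Hypothetical expected-value calculation for a longtermist intervention.
# All probabilities and payoffs are illustrative placeholders, not estimates.

def expected_value(p_success, benefit, p_backfire, harm):
    """Expected value when the intervention can either help or backfire."""
    return p_success * benefit - p_backfire * harm

benefit = 1e15   # assumed value of securing a very good long-term future
harm = 1e15      # assumed disvalue if the intervention backfires

# Tiny changes in the (unknowable) backfire probability flip the sign:
for p_backfire in (0.0, 1e-6, 2e-6):
    ev = expected_value(p_success=1e-6, benefit=benefit,
                        p_backfire=p_backfire, harm=harm)
    print(f"p_backfire={p_backfire:.0e}  ->  expected value = {ev:+.2e}")
```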
I also disagree that we should have sharp probabilities, since this means making fairly arbitrary but potentially hugely influential commitments; handling that kind of uncertainty is exactly what sensitivity analysis and robust decision-making under deep uncertainty are for. And requiring sharp probabilities doesn't rule out the possibility that two people could reach vastly different conclusions from exactly the same evidence, simply because they started from different priors or weight the evidence differently.
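As a rough illustration of that last point (the likelihood ratio and priors below are assumptions I've made up for the example, not anyone's actual estimates), a one-parameter sensitivity sweep shows how sharp posteriors can still diverge wildly on identical evidence:

```python
# Illustrative sensitivity analysis: three observers see exactly the same
# evidence (the same likelihood ratio) but start from different priors,
# and end up with very different sharp posteriors. All numbers are made up.

def posterior(prior, likelihood_ratio):
    """Bayesian update of a point prior given likelihood ratio P(E|H)/P(E|~H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

likelihood_ratio = 10.0  # strength of the shared evidence (assumed)

for prior in (1e-6, 1e-3, 1e-1):  # "fairly arbitrary" starting commitments
    print(f"prior={prior:.0e}  ->  posterior={posterior(prior, likelihood_ratio):.4f}")
```

Sweeping the prior like this at least makes explicit which conclusions are driven by the shared evidence and which are driven by the arbitrary starting commitment, which is the work I'd want sensitivity analysis to do here.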