Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an outsider's perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum in the hope that William MacAskill and Hilary Greaves might see and respond to it. There's also a little reddit discussion forming that might be of interest to some.
Cheers!
Greaves and MacAskill do discuss risk aversion, uncertainty/ambiguity aversion, and the issue of seemingly arbitrary probabilities in sections 4.2 and 4.5. They admit that risk aversion with respect to the difference one makes does undermine strong longtermism (and I think ambiguity aversion with respect to the difference one makes would, too, although it might also lead you to do as little as possible to avoid backfiring). However, they cite Snowden (2015) to argue that aversion with respect to the difference one makes is too agent-relative and therefore incompatible with impartiality.
Apparently they're working on another paper with Mogensen on these issues.
They also point out that organizations like GiveWell deal with cluelessness by effectively assuming it away, and you haven't really addressed this point. However, I think the steelman for GiveWell is that they're extremely skeptical about causal effects (or optimistic about the speculative long-term causal effects of their charities' interventions) and possibly uncertainty/ambiguity-averse with respect to the difference one makes (EDIT: although it's not clear that this justifies ignoring speculative future effects; rather, it might mean assuming worst cases).
See also the following posts and discussion:
Greaves and MacAskill, in my view, don't adequately address concerns about skepticism of causal effects and the value of their specific proposals. I discuss this in this thread and this thread.