One reason to be suspicious of taking into account lost potential lives here is that if you always do so, it looks like you might get a general argument for "development is bad". Rich countries have low fertility compared to poor countries. So anything that helps poor countries develop is likely to prevent some people from being born. But it seems pretty strange to think we should wait until we find out how much development reduces fertility before we can decide if it is good or bad.
A bit of a tangent in the current context, but I have slight issues with your framing here: mechanisms that prevent the federal government from telling the state governments what to do are not necessarily mechanisms that protect individual citizens, although they could be. But equally, if the federal government is more inclined to protect the rights of individual citizens than the state government is, then they are the opposite. And sometimes framing it in terms of individual rights is just the wrong way to think about it: i.e. if the federal government wants some economic regulation and the state government doesn't, and the regulation has complex costs and benefits that work out well for some citizens and badly for others, then "is it the feds or the state government protecting citizens' rights" might not be a particularly helpful framing.
This isn't just abstract: historically, in the South, it was often the feds who wanted to protect Black citizens and the state governments that wanted to avoid this under the banner of states' rights.
I am biased because Stuart is an old friend, but I found this critique of the idea that social media use causes poor mental health fairly convincing when I read it: https://www.thestudiesshowpod.com/p/episode-25-is-it-the-phones Though obviously you shouldn't just make up your mind about this based on a single source, and there might be a degree of anti-woke, and therefore anti-anti-tech, bias creeping in.
I have some sympathy with that view, except that I think this is a problem for a much wider class of views than utilitarianism itself. The problem doesn't (entirely) go away if you modify utilitarianism in various attractive ways, like "don't violate rights", or "you're allowed/obligated to favour friends and family to some degree", or "doing the best thing is just good, not obligatory". The underlying issue is that it seems silly to ever think you can do more good by helping insects than more normal beneficiaries, or that you can do more good in a galaxy-brained indirect way than directly, but there are reasonably strong theoretical arguments that those claims are either true, or at least could be true for all we know. That is an issue for any moral theory that says we can rank outcomes by desirability, regardless of how it thinks the desirability of various outcomes factors into determining what the morally correct action is. And any sane theory, in my view, holds that how good/bad the consequences of an action are is relevant to whether you should do it, whether or not other things are also relevant to whether the action should be performed.
Of course it is open to the non-consequentialist to say that the goodness of consequences is sometimes relevant, but never with insects. But that seems like cheating to me unless they can explain why.
What, in your view, should be done about the possibility that insects or arthropods are conscious and affected by our interventions?
EDITED to add: Just reviving the idea that it's OK to favour humans over animals to a very high degree won't help here, since it's animal-versus-animal interests we are dealing with.
But the critique also mentions that Ord says the long reflection could involve the wider public and that he admits other disciplines will be important too. I think you are just reacting to the fact that he clearly doesn't like Ord or longtermism, and that he thinks that even Ord's moderate position is still elitist. That's different from misrepresentation of a kind that makes him an untrustworthy source.
"More generally, I am very skeptical of arguments of the form "We must ignore X, because otherwise Y would be bad". Maybe Y is bad! What gives you the confidence that Y is good? If you have some strong argument that Y is good, why can't that argument outweigh X, rather than forcing us to simply close our eyes and pretend X doesn't exist?"
This is very difficult philosophical territory, but I guess my instinct is to draw a distinction between:
a) ignoring new evidence about what properties something has, because that evidence would overturn your prior moral evaluation of that thing.
b) deciding that well-known properties of a thing don't contribute towards it being bad enough to overturn the standard evaluation of it, because you are committed to the standard moral evaluation. (Unlike a), this doesn't involve inferring that something has particular non-moral properties from the claim that it is morally good/bad.)
a) always feels dodgy to me, but b) seems like the kind of thing that could be right, depending on how much you should trust judgments about individual cases versus judgments about abstract moral principles. And I think I was only doing b) here, not a).
Having said that, I remember a conversation I had in grad school in which a faculty member who was probably much better at philosophy than me claimed that even a) is only automatically bad if you assume moral anti-realism.