I was thinking about a scenario where the scan has not yet happened, but will happen before prices finalize. In that scenario, at a minimum, you are not incentivized to bid according to your true beliefs about what will happen. Maybe that incentive problem disappears before the market finalizes in this particular case, but it's still pretty disturbing; to me it suggests that the basic idea of having the market make the choices is a dangerous one. Even if the incentive problem were to go away before finalization in general (which is unclear to me), it would still mean that earlier market prices won't work properly for sharing information.
OK, I tried to think of an intuitive example where using the market could cause heavy distortions in incentives. Maybe something like the following works?
If I've got that right, then having the market make decisions could be very harmful. (Let me know if this example isn't clear.)
Certainly, if your decision is a deterministic function of the final market price, then there's no way that any hidden information can influence the decision except via the market price. However, what I worry about here is whether investors in such a market still have the right incentives: will they produce the same prices as they would if the decision were guaranteed to be made randomly? That might be true, and I can't easily come up with a counterexample, but it would be nice to have an argument. Do I correctly understand your second-to-last paragraph as meaning that you aren't sure of this either?
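To make my worry concrete, here's a small back-of-the-envelope sketch (all the numbers and the log-scoring-rule setup are my own assumptions, not anything from the original proposal). Suppose there are two conditional markets, one on success given action A and one given B, the decision rule is "take whichever action has the higher final price," and trades in the market for the action that isn't taken are voided. A trader who moves the price from the market's prior to a report `q` earns an expected log-score profit that is maximized by honest reporting *within* the settled market, but the trader can also manipulate *which* market settles for free, because shading a price in the voided market costs nothing:

```python
import math

def expected_profit(p, prior, report):
    """Expected log-scoring-rule profit for a trader with true belief p
    who moves a conditional market's price from `prior` to `report`.
    (Only applies if this market is the one that settles.)"""
    return (p * math.log(report / prior)
            + (1 - p) * math.log((1 - report) / (1 - prior)))

# Hypothetical numbers: true success probabilities under actions A and B,
# and the market's current (prior) prices for each conditional market.
p_A, prior_A = 0.60, 0.55   # A is genuinely the better action
p_B, prior_B = 0.50, 0.90   # B's market is badly mispriced

# Honest reporting: final prices (0.60, 0.50), so the rule picks A.
# Only the A market settles; the trader earns a small correction profit.
honest = expected_profit(p_A, prior_A, p_A)

# Manipulation: shade A's price below 0.50 so the rule picks B instead.
# The shaded A report is voided at no cost, and the trader pockets the
# large mispricing in the B market.
manipulated = expected_profit(p_B, prior_B, p_B)

print(f"honest profit (A chosen):      {honest:.4f}")
print(f"manipulated profit (B chosen): {manipulated:.4f}")
```

If I've set this up correctly, the honest trader earns roughly 0.005 while the manipulator earns roughly 0.51, so the trader's selfish best move steers the decision toward the worse action B, which is exactly the "wrong incentives" failure I'm worried about.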
Somewhat of an aside but I was wondering if some others could confirm or deny my observations:
From interacting with EA folks, my impression was that the community has quite a pluralistic attitude towards utilitarianism. That is, not everyone is a utilitarian, and those who are are often aware of the debates in philosophy, so there's a variety of views on whether people accept total versus average utilitarianism, as well as act, rule, or preference utilitarianism. Yet a lot of the criticisms seem to treat the EA community as having a rigid consensus on a kind of OG Bentham-esque act utilitarianism.
Thus it seems to me that either
Incidentally, I think this is a well-written and useful article. It voices concerns that many people have, and does so with an unusual amount of charity, so I think it would be a positive thing to try to read it in good faith.