A failure mode I see here is philosophy education coming to be regarded the way math education often is now: something widely believed to have no practical application, yet which everyone is forced to learn anyway. Why does a farmer or engineer need to know the difference between consequentialism and deontology? If philosophy comes to be seen as rigor for the sake of rigor, it will be trusted less.
> In each case, I think EA emphasizes estimating the impact in terms of human outcomes like lives saved. Successful Supreme Court cases could be a useful intermediate outcome, but ultimately I'd want to know something like the impact of the average case on well-being, as well as the likelihood of cases going the other way in the absence of funding the Institute for Justice.
But a Supreme Court case could have effectively unbounded effects into the future, since it will be cited as precedent in further cases, which in turn shape later ones. Is it really possible to model this? And if it is not, could the Institute for Justice be the most effective charity even though it cannot be analyzed under an EA framework?