Parallels Between AI Safety by Debate and Evidence Law

Thanks for this very thoughtful comment!

I think it is accurate to say that the rules of evidence have generally aimed for truth-seeking per se. That is their stated goal, and it generally explains the liberal standard for admission (relevance, which is a very low bar and tracks Bayesian epistemology well), the even more liberal standards for discovery, and most of the admissibility exceptions (which are generally explainable by humans' imperfect Bayesianism).

You're definitely right that the legal system as a whole has many goals other than truth-seeking. However, those other goals are generally advanced through other aspects of the justice system. As an example, finality is a goal of the legal system, and is advanced through, among other things, statutes of limitations and repose. Similarly, the "beyond reasonable doubt" standard for criminal conviction is in some sense contrary to truth-seeking but advances the policy preference for underpunishment over overpunishment.

You're also right that there are some exceptions to this within evidence law itself, but not many. For example, the attorney–client privilege exists not to facilitate truth-seeking, but to protect the attorney–client relationship. Similarly, the spousal privileges exist to protect the marital relationship. (Precisely because such privileges are contrary to truth-seeking, they are interpreted narrowly. See, e.g., United States v. Aramony, 88 F.3d 1369, 1389 (4th Cir. 1996); United States v. Suarez, 820 F.2d 1158, 1160 (11th Cir. 1987).) And of course, some rules of evidence have both truth-seeking and other policy rationales. Still, on the whole and in general, the rules of evidence are aimed towards truth.

High stakes instrumentalism and billionaire philanthropy

Good post! Thanks for linking to my work. I also agree with Larks that it's nice to have academic political theory brought in here.

AI Benefits Post 1: Introducing “AI Benefits”

Thanks! You are correct. Updated to clarify that this is meant to be "the subset of AI Benefits on which I am focusing"—i.e., nonmarket benefits.

Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering?
  1. suffering that is "meaningful" (such as mourning)

This might be a specific instance of

3*) Suffering that is a natural result of healthy/normal/inevitable/desirable emotional reactions
