
Summary:

The touchstone of antitrust compliance is competition. To be legally permissible, any restraint of trade among industry participants must have sufficient countervailing procompetitive justifications. Anticompetitive horizontal agreements such as boycotts (including agreements to refuse to produce certain products) are usually per se illegal.

The “learned professions,” including engineers, frequently engage in somewhat anticompetitive self-regulation through professional standards. These standards are not exempt from antitrust scrutiny. Nevertheless, some Supreme Court opinions have held that certain forms of professional self-regulation that would otherwise receive per se condemnation may instead receive the more lenient antitrust analysis of the “Rule of Reason,” which weighs procompetitive against anticompetitive effects to determine legality. To receive rule-of-reason review, such professional self-regulation would need to:

  1. Be promulgated by a professional body;
  2. Not directly affect price or output level; and
  3. Seek to correct some market failure, such as information asymmetry between professionals and their clients.

Professional ethical standards promulgated by a professional body (e.g., one comparable to the American Medical Association or the American Bar Association) that prohibit members from building unsafe AI could plausibly meet all of these requirements.

This paper does not argue that such an agreement would clearly prevail in court, that it would be legal, or that it would survive rule-of-reason review. It argues only that there is a colorable case for analyzing such an agreement under the Rule of Reason rather than a per se rule. This could therefore be a plausible route to an antitrust-compliant horizontal agreement not to engineer AI unsafely.

Comments

Planned summary for the Alignment Newsletter:

One way to reduce the risk of unsafe AI systems is to have agreements between corporations that promote risk reduction measures. However, such agreements may run afoul of antitrust laws. This paper suggests that this sort of self-regulation could be done under the “Rule of Reason”, in which a learned profession (such as “AI engineering”) may self-regulate in order to correct a market failure, as long as the effects of such a regulation promote rather than harm competition.

In the case of AI, self-regulation by AI engineers could be justified as correcting the information asymmetry between the AI engineers (who know about the risks) and the users of AI systems (who don’t). In addition, since AI engineers arguably do not have a monetary incentive in such self-regulation, it need not be anticompetitive. Thus, this seems like a plausible method by which AI self-regulation could occur without running afoul of antitrust law, and so is worthy of further investigation.