

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism. In May 2019, I received a J.D. cum laude from Harvard Law School. I currently work as a Research Scientist in Governance at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute; Founding Advisor and Research Affiliate at the Legal Priorities Project; and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.


Law-Following AI
AI Benefits


Topic Contributions

I do think it'd be interesting to have an AGI-pilled economist talk to one of the economists who do GWP forecasting to see if they can find cruxes.

Hi John! You might be interested in my Law-Following AI Sequence, where I've explored very similar ideas:

I'm glad we've seemed to converge on similar ideas. I would love to chat sometime!

Rise of the Conservative Legal Movement is very interesting and good.

There's also this interesting discussion from Bentham against the attorney–client privilege:

(I don't endorse Bentham's view).

Thanks! Was not aware of this; definitely relevant :-)

Agreed, and this is part of why I'm not very sold on this critique.

I was curious about who would be the firm's opponent in this scenario, i.e. the actor trying to legally implement the Windfall Clause.

This is underdetermined by the concept of the WC itself, but is a very important design consideration.

The worst-case scenario for this failure mode is that some very large number of people are plaintiffs in their individual capacity. Coordinating to enforce would be hard for them, but class action mechanisms (on which I'm not an expert!) could probably help.

A better approach would be to have some identifiable small number (including one) of recipients. This is in fact what we suggest in the report (Appendix II). This helps that actor internalize the costs of seeking to enforce the WC.

I think we can improve on that too, as the report suggests to some degree. For example, I strongly believe the WC should include fee-shifting provisions for exactly this reason, so that the AI developer would be on the hook for the legal fees of those trying to enforce the WC. And a variety of standard covenants in debt arrangements—such as accounting, indemnification, and domicile requirements—could further reduce risk. I also think the gold standard is securitizing the WC payment instrument, which would probably make enforcement easier for a variety of reasons.

(Just flagging that this is very related to the discussion in the first part of Reasons and Persons, and for the reasons presented therein I don't think it's a decisive argument against consequentialism as criterion of rightness.)

I would note that advocating for improving utility (a core EA concept!) is not the same thing as utilitarianism.
