
johnjnay

107 karma · Joined Sep 2022
law.stanford.edu/directory/john-nay/

Bio

Stanford - A.I. research. 

Founder of an A.I. technology company, Brooklyn Artificial Intelligence Research (Skopos Labs, Inc.), which owns the investment management firm, Brooklyn Investment Group (https://bkln.com). 

Conducted research funded by the U.S. National Science Foundation and the U.S. Office of Naval Research. Created the first A.I. course at the NYU School of Law. Published research on A.I., finance, law, policy, economics, and climate change. Publications at http://johnjnay.com, and Twitter at https://twitter.com/johnjnay.

Comments (14)

I agree this would require new legislation to fully address (rather than merely a change to a rule under existing statute).

As far as I'm aware there has not been any consideration of relevant legislation, but I would love to learn about anything that others have seen that may be relevant.

Unfortunately, I think the upside of considering amendments to lobbying disclosure laws to attempt to address the implications of this outweighs the downsides of people learning more about this.

Also, better-funded special interest groups are more likely to independently discover and advance AI-driven lobbying than the less well-funded and more diffuse interests of average citizens.

Cross-posted here: https://law.stanford.edu/2023/01/06/large-language-models-as-lobbyists/

I think AI alignment can draw from existing law to a large degree. New legal concepts may be needed, but I think there is a lot of legal reasoning, legal concepts, legal methods, etc. that are directly applicable now (discussed in more detail here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4218031).

Also, I think we should keep the involvement of AI in law-making (broadly defined) as limited as we can. And we should train AI to understand when there is sufficient legal uncertainty that a human is needed to resolve the correct action to be taken.

This is a great post. 

Law is the best solution I can think of to address the issues you raise. 

This was cross-posted here as well: 

A follow-up thought based on conversations catalyzed by this post:

Much of the research on governing AI and managing its potential unintended consequences currently falls into two ends of a spectrum, distinguished by assumptions about the imminence of transformative AGI. Research operating under the assumption of a high probability of near-term transformative AI (e.g., within 10-15 years) is typically focused more on how to align AGI with ideal aggregations of human preferences (through yet-to-be-tested aggregation processes). Research operating under the assumption of a low probability of near-term transformative AI is typically focused on how to reduce the discriminatory, safety, and privacy harms posed by present-day (relatively "dumb") AI systems. The proposal in this post seeks a framework that, over time, bridges these two important ends of the AI safety spectrum. 
