Cullen_OKeefe

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism. In May 2019, I received a J.D. cum laude from Harvard Law School. I currently work as a Research Scientist in Policy at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute; Founding Advisor and Research Affiliate at the Legal Priorities Project; and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence. To learn more, visit my personal website, cullenokeefe.com.

Comments

EA and the Possible Decline of the US: Very Rough Thoughts

Thanks David! I guess I was implicitly thinking of scenarios where the decline of the US was not caused by a GCR, since such cases would already qualify for EA prioritization. But I agree that a decline of the US due to a GCR would meet my stated definition of Collapse.

EA and the Possible Decline of the US: Very Rough Thoughts

I am pretty confident that's wrong. The disanalogy is that with financial markets, you can withdraw money right now and move it to safer assets or spend it on present consumption.

EA and the Possible Decline of the US: Very Rough Thoughts

Yeah, they are definitely quite different, and probably less important from an EA perspective. I just included them for completeness because of the definition of "Collapse" I gave.

EA and the Possible Decline of the US: Very Rough Thoughts

Figuring out how to move politics towards the exhausted majority seems interesting. They probably care about stability a lot more than hyper-partisans do.

EA and the Possible Decline of the US: Very Rough Thoughts

Thanks David. Great analysis as usual :-)

I'm not actually sure we disagree on anything. I agree that

"if the worst case happens, we're still likely looking at a decades-long process, during which most of the worst effects are mitigated by other countries taking up the slack, and pushing for the US's decline to be minimally disruptive to the world."

I definitely also agree that it behooves EAs to try to avoid myopia. I have tried to do so here but may very well have failed!

In terms of expected disvalue, I would guess that severe and rapid collapses (more like the USSR than France or Spain) are the most important, due to nuclear insecurity and the possible triggering of great-power conflict.

As for cost-competitiveness with other longtermist interventions, it seems that improving nuclear security against domestic instability is actually pretty tractable and may be neglected. If so, that suggests to me that it may be approximately as cost-effective as most marginal nuclear security work generally. The only other thing that seems plausibly cost-effective to me now is contingency planning for key longtermist institutions, so that their operations are minimally disrupted by a turbulent decline.

Cullen_OKeefe's Shortform

Although I've seen some people say they feel like EA is in a bit of an intellectual slump right now, I think the number of new, promising EA startups may be higher than ever. I'm thinking of some of the charities recently incubated by Charity Entrepreneurship and some new animal welfare orgs like the Fish Welfare Initiative.

Long-Term Future Fund: Ask Us Anything!

What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?

Cullen_OKeefe's Shortform

The venerable Judge Easterbrook appears to understand harms from unaligned AI. In this excerpt, he invokes a favorite fictional example among the AI risk community:

The situation is this: Customer incurs a debt and does not pay. Creditor hires Bill Collector to dun Customer for the money. Bill Collector puts a machine on the job and repeatedly calls Cell Number, at which Customer had agreed to receive phone calls by giving his number to Creditor. The machine, called a predictive dialer, works autonomously until a human voice comes on the line. If that happens, an employee in Bill Collector's call center will join the call. But Customer no longer subscribes to Cell Number, which has been reassigned to Bystander. A human being who called Cell Number would realize that Customer was no longer the subscriber. But predictive dialers lack human intelligence and, like the buckets enchanted by the Sorcerer's Apprentice, continue until stopped by their true master.

Soppet v. Enhanced Recovery Co., LLC, 679 F.3d 637, 638–39 (7th Cir. 2012).
