Cullen_OKeefe

Cullen_OKeefe's Comments

FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good

You are not the only person to have expressed interest in such an arrangement :-) Unfortunately I think there might be some antitrust problems with that.

FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good

I am fairly confident that corporate policy is better. Corporate policy has a number of advantages:

  • Firms get more of a reputational boost
  • The number of actors you need to persuade is very small
  • Corporate policy is much more flexible
  • EA is probably better equipped to secure corporate policy changes than to secure new legislation/regulation
  • It's easier to make corporate policy permanent

Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar?

To my understanding, China produces these masks in such volume that it can afford to sell them to the whole population. But in, say, the US, we have the opposite situation.

Then shouldn't we just buy them from China?

Activism for COVID-19 Local Preparedness

Thanks! Here's the quote:

Harvard epidemiologist Marc Lipsitch estimates that 40 to 70 percent of the human population could potentially be infected by the virus if it becomes pandemic. Not all of those people would get sick, he noted.

Activism for COVID-19 Local Preparedness

COVID-19 may infect 40-70 percent of the world's population.

What is your source for this? This seems way too high given that even in Hubei (population: 58.5 million), only about 1.1 in 1,000 people (total: 67,103) had confirmed cases.
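For what it's worth, here is the back-of-the-envelope arithmetic behind that rate (a minimal sketch in Python, using only the two figures quoted above):

```python
# Back-of-the-envelope check of the confirmed-case rate in Hubei,
# using the population and case counts cited in the comment above.
hubei_population = 58_500_000  # ~58.5 million residents
confirmed_cases = 67_103

rate_per_1000 = confirmed_cases / hubei_population * 1_000
print(f"{rate_per_1000:.2f} confirmed cases per 1,000 residents")
# -> 1.15 confirmed cases per 1,000 residents
```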

What are the challenges and problems with programming law-breaking constraints into AGI?

The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.

What are the challenges and problems with programming law-breaking constraints into AGI?

Certainly you still need legal accountability -- why wouldn't we have that? If we solve alignment, then we can just have the AI's owner be accountable for any law-breaking actions the AI takes.

I agree that that is a very good and desirable step to take. However, as I said, it also incentivizes the AI agent to obfuscate its actions and intentions to protect its principal. In the human context, human agents do this too, but they are independently disincentivized from breaking the law because they face legal liability for their own actions. I want (and I suspect you also want) AI systems to face a similar disincentive.

If I understand correctly, you identify two ways to do this in the teenager analogy:

  1. Rewiring
  2. Explaining laws and their consequences and letting the agent's existing incentives do the rest.

I could be wrong about this, but ultimately, for AI systems, it seems like both are actually similarly difficult. As you've said, for 2. to be most effective, you probably need "AI police." Those police will need a way of interpreting the legality of an AI agent's "mental" states and actions, and of mapping them onto existing laws.

But if you need to do that for effective enforcement, I don't see why (from a societal perspective) we shouldn't just do it on the actor's side rather than the "police's" side. Baking the enforcement into the agents has the benefits of:

  1. Not incentivizing an arms race
  2. Giving enforcers a clearer picture of the AI's "mental state"

What are the challenges and problems with programming law-breaking constraints into AGI?

But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).

My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (in both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.

I'm not sure what you're trying to imply with this -- does this make the AI's task easier? Harder? The generality somehow implies that the AI is safer?

Sorry. I was responding to the "all laws" point. My point was that making a law-following AI that can follow (A) all enumerated laws is not much harder than making one that can follow (B) any given law. That is, the difficulty of construction scales sub-linearly with the number of laws the AI needs to follow: the interpretative tools that get you to (B) should generalize pretty well to (A).

What are the challenges and problems with programming law-breaking constraints into AGI?

First, it would be hard to do. I am a programmer / ML researcher and I have no idea how to program an AI to follow the law in some guaranteed way. I also have an intuitive sense that it would be very difficult. I think the vast majority of programmers / ML researchers would agree with me on this.

This is valuable information. However, some ML people I have talked with about this have given positive feedback, so I think you might be overestimating the difficulty.

Second, it doesn't provide much value, because you can get most of the benefits via enforcement, which has the virtue of being the solution we currently use.

Part of the reason that enforcement works, though, is that human agents have an independent incentive not to break the law (and, e.g., to report legal violations), since they are legally accountable for their actions.

But AI-enabled police would be able to probe actions, infer motives, and detect bad behavior better than humans could. In addition, AI systems could have fewer rights than humans, and could be designed to be more transparent than humans, making the police's job easier.

This seems to require the same type of fundamental ML research that I am proposing: mapping AI actions onto laws.
