Comments by Nick_Beckstead (joined Aug 2014)
Thanks, I think this is subtle and I don't think I expressed this perfectly.

> If someone uses AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

No, I would not count this. 

I'd probably count it if the AI (a) somehow formed the intention to do this and then developed and released the pathogen without human direction, but (b) couldn't yet produce as much economic output as full automation of labor would.

There are no official rules on that. I do think that some back and forth in the comments is a way to make your case more convincing, so there's some edge there.

1 - counts for purposes of this question
2 - doesn't count for purposes of this question (but would be a really big deal!)

Thanks for this post! Future Fund has removed this project from our projects page in response.

Thanks for the feedback! I think this is a reasonable comment. The main things that prevented us from doing this are:
(i) I thought it would detract from the simplicity of the prize competition and would be hard to communicate clearly and simply; and
(ii) I think the main thing that would make our views more robust is seeing what the best arguments are for quite different views, and this seems to be addressed by the competition as it stands.

For simplicity on our end, I'd appreciate it if you had one post at the end that is the "official" entry and links to the other posts. That would be OK!

Plausibility, argumentation, and soundness will be inputs into how much our subjective probabilities change. We framed this in terms of subjective probabilities because it seemed like the easiest way to crisply point at ideas which could change our prioritization in significant ways.
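
As a rough illustration of what "how much our subjective probabilities change" could look like quantitatively, here is a minimal Bayesian-update sketch. The prior and the likelihood ratio are hypothetical placeholders, not the Future Fund's actual numbers or the competition's scoring procedure.

```python
# Illustrative sketch of a subjective-probability update. The prior and the
# strength of the argument (likelihood ratio) are hypothetical placeholders;
# this is not the competition's actual scoring procedure.

def update_probability(prior: float, likelihood_ratio: float) -> float:
    """Update a subjective probability via Bayes' rule in odds form."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.15             # hypothetical prior on some proposition
likelihood_ratio = 3.0   # how strongly a submitted argument favors the proposition

posterior = update_probability(prior, likelihood_ratio)
print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")
# A persuasive entry is one that moves the posterior far from the prior.
```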

Thanks! The part of the post that was meant to be most responsive to this point about the size of AI x-risk was this:

For "Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI." I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe's estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low) and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower I'd probably update. There are a number of reviews of Joe Carlsmith's work that were helpful to varying degrees but would not have won large prizes in this competition.

I think explanations of how Joe's probabilities should be different would help. Alternatively, an explanation of why some other set of propositions is relevant (with probabilities attached and mapped to a conclusion) could also help.
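
To make concrete what "probabilities attached and mapped to a conclusion" might look like, here is a minimal sketch in the spirit of a Carlsmith-style multi-premise argument. The premise labels and numbers are purely illustrative placeholders, not Joe's estimates or the Future Fund's.

```python
# Illustrative sketch: how probabilities attached to a chain of premises
# map to a probability for a conclusion. All labels and numbers below are
# hypothetical placeholders, not anyone's actual estimates.

premise_probabilities = {
    "Premise 1 (e.g. the relevant systems become feasible to build)": 0.65,
    "Premise 2, conditional on Premise 1": 0.80,
    "Premise 3, conditional on Premises 1-2": 0.40,
    "Premise 4, conditional on Premises 1-3": 0.50,
}

# If each premise is stated conditional on the ones before it, the probability
# of the conjunction (and hence of the conclusion it entails) is the product.
p_conclusion = 1.0
for label, p in premise_probabilities.items():
    p_conclusion *= p
    print(f"{p:.2f}  {label}  (running product: {p_conclusion:.3f})")

print(f"\nImplied probability of the conclusion: {p_conclusion:.3f}")
```

An entry in this format would argue for different numbers on the premises (or for a different set of premises), and the bottom line would move accordingly.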
