Joined Aug 2014
Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I'm sorry to see you go, and wish you the best of luck in whatever comes next.

Thanks, I think this is subtle and I don't think I expressed this perfectly.

> If someone uses AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

No, I would not count this. 

I'd probably count it if the AI (a) somehow formed the intention to do this on its own, then developed and released the pathogen without human direction, but (b) couldn't yet produce as much economic output as full automation of labor.

There are no official rules on that. I do think that some back and forth in the comments is a way to make your case more convincing, so there's some edge there.

1 - counts for purposes of this question
2 - doesn't count for purposes of this question (but would be a really big deal!)

Thanks for this post! Future Fund has removed this project from our projects page in response.

Thanks for the feedback! I think this is a reasonable comment, and the main things that prevented us from doing this are:
(i) I thought it would detract from the simplicity of the prize competition, and would be hard to communicate clearly and simply
(ii) I think the main thing that would make our views more robust is seeing what the best arguments are for having quite different views, and this seems like it is addressed by the competition as it stands.

For simplicity on our end, I'd appreciate it if you had one post at the end that was the "official" entry, which links to the other posts. That would be OK!

Plausibility, argumentation, and soundness will be inputs into how much our subjective probabilities change. We framed this in terms of subjective probabilities because it seemed like the easiest way to crisply point at ideas which could change our prioritization in significant ways.
