I'm not sure about this, but there is a possibility that this sort of model would violate US online gambling laws. (These laws, along with those against unregulated trading of securities, are the primary obstacles to prediction markets in the US.) IIRC, you can get into trouble with these rules if there is a payout on the outcome of a single event, which seems like it would be the case here. There's a definite gray area, but before setting up such a thing one would want to get some legal clarity.
I'd note that Metaculus is not a prediction market and there are no assets to "tie up." Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system there are a number of incentives skewing one way or another. But for a question like this I'd say it's a pretty good aggregator of what people who think about such issues (and have an excellent forecasting track record) think — there's heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.
Great, thanks! Just PM me (anthony@futureoflife.org) and I'll put you in touch once the project is underway.
Probably some of both: the toolkit we can make available to everyone, but the capacity to advise will obviously be limited by available personnel.
Totally agree here that what's interesting is the ways in which things turn out well due to agency rather than luck. Of course if things turn out well, it's likely to be in part due to luck — but as you say that's less useful to focus on. We'll think about whether it's worth tweaking the rules a bit to emphasize this.
Even if you don't speak for FLI, I (at least somewhat) do, and agree with most of what you say here — thanks for taking the time and effort to say it!
I'll also add that — again — we envisage this contest as just step 1 in a bigger program, which will include other sets of constraints.
There's obviously lots I disagree with here, but at bottom, I simply don't think it's the case that economically transformative AI necessarily entails singularity or catastrophe within 5 years in any plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them and relative to others is very, very difficult.
Speaking as one partly responsible for that conjunction, I'd say the aim here was to target a scenario that is interesting (AGI) but not too interesting. (It's called a singularity for a reason!) It's arguably a bit conservative in terms of AGI's transformative power, but rapid takeoff is not guaranteed (Metaculus currently gives ~20% probability to >60 months), nor is superintelligence axiomatically the same as a singularity. It is also in a conservative spirit of "varying one thing at a time" (rather than a claim of maximal probability) that we ke...
Thanks Hayden!
FLI is also quite funding-constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk assessment framework, just passed in the US NDAA, is likely to be critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained.
I generally feel that the idea that AI safety (including research) is not funding-constrained is an incorrect and potentially dangerous one — but that's a bigger topic for discussion.
Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:
But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development.
It is unclear to me what you suggest we would be “sacrificing” if militaries did not have the legal opportu...
While such systems could be used on civilian targets, they presumably would not be specialized as such — i.e. even if you can use an antitank weapon on people, that's not really what it's for, and I expect most antitank weapons, if they're used, are used on tanks.
That's probably true. The more important point, I think, is that this prohibition would be a potential/future, rather than actual, loss to most current arms-makers.
Fair enough. It would be really great to have better research on this incredibly important question.
Though given the level of uncertainty, it seems like launching an all-out (even if successful) first strike is at least (say) 50% likely to collapse your own civilization, and that alone should be enough.
Thanks for your comments! I've put a few replies, here and elsewhere.
Apologies for writing unclearly here. I did not mean to imply that
each participant is better off unilaterally switching into cooperative mode, even if no one else does so?
Instead I agree that
the key problem is creating a mechanism by which that coordination/cooperation can arise and be stable.
I think I was on Brave browser, which may store less locally, so it's possible that was a contributor.
No, that was just a super rough estimate: world GDP is ~$100 Tn, so one decade's worth is ~$1 Qd, and I'm guessing a global nuclear war would wipe out a significant fraction of that.
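For what it's worth, here's a minimal back-of-envelope sketch of that estimate. The GDP figure is from above; the fraction destroyed is purely an illustrative assumption, not a figure from any study:

```python
# Back-of-envelope: rough cost of a global nuclear war in lost world output.
# All inputs are illustrative assumptions, not sourced estimates.

world_gdp = 100e12                 # ~$100 trillion/year of world GDP
years = 10                         # one decade of output
decade_output = world_gdp * years  # ~$1 quadrillion

fraction_destroyed = 0.5           # hypothetical "significant fraction"
estimated_loss = decade_output * fraction_destroyed

print(f"Decade of world output: ${decade_output:.2e}")               # ~1e15
print(f"Loss at {fraction_destroyed:.0%} destroyed: ${estimated_loss:.2e}")
```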
My intuition has been that, at least in the medium term, unless AWs are self-replicating they'd pose global catastrophic risk primarily through escalation to nuclear war; but if there are other scenarios, that would be interesting to know (by PM if you're worried about info hazards).
The problem is I was not logged in on that browser. It asked me to log in to post the comment, and after I did so the comment was gone.
Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about "US battlefield" and "global battlefield," but the specific example applications surveyed are:
...U.S. Battlefield -- As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise
Thanks for pointing these out. Very frustratingly, I just wrote out a lengthy response (to the first of the linked posts) that this platform lost when I tried to post it. I won't try to reconstruct that but will just note for now that the conclusions and emphases are quite different, probably most in terms of:
The important things about a pause, as envisaged in the FLI letter, for example, are that (a) it actually happens, and (b) it is not lifted until there is an affirmative demonstration that the risk has been addressed. The FLI pause call was not, in my view, made on the basis of any particular capability or risk, but because of the out-of-control race to run ever-larger scaling experiments without any reasonable safety assurances. This pause should still happen, and it should not be lifted until there is a way in place to assure that safety. Many of the things FLI...