Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
Ok, fair point. Maybe OpenPhil then? Or Rethink Priorities? I think in general the EA community and its leadership are asleep at the wheel here. We're in the midst of an unprecedented global emergency and the stakes couldn't be higher, yet there is very little movement apart from a rag-tag bunch of the rank and file (AGI Moratorium HQ Slack -- please join if you want to help).
I think CEA needs to get behind the push for a global moratorium on AGI. Everything else is downstream of that (i.e. without such a moratorium there likely won't even be a world to do good in, or any sentient beings to help.)
I think for that money you're going to need to prove that you're worth it - can you link to any of your work? Also, as per my note at the top of the OP, I think that there basically isn't time to spin up an alignment career now, so unless you are a genius or have some novel insights into the problem already, then I'm not very hopeful that your work could make a difference at this late stage. I'm more excited about people pivoting to work on getting a global AGI moratorium in place asap. Once we have that, then we can focus on a "Manhattan Project" for Alignment.
Paid subscriptions started with the official release of GPT-4 (March). 100M is likely a significant underestimate now; I don't think the user base saturated there. This says 1B users (but doesn't seem that credible). Also, 1% seems kind of low when the GPT-4 answers are significantly better (though I guess you can also get GPT-4 for free on Bing). I'd be surprised if there were <10M paid subscribers -- 1% of ~1B users would already be ~10M (cf. Netflix and Spotify with ~200M each).
My post has a long list of potential actions. "Steely determination to survive" (as per Geoffrey Miller's comment) is the vibe I'm going for.
Agree with your background claims. But think we should be pivoting toward advocacy for slowing down / pausing / shutting down AI capabilities in general, in the post GPT-4+AgentGPT era. Short timelines means we should lower the bar for funding, and not worry quite so much about downside risks (especially if we only have months to get a moratorium in place).
Thanks for the reply. I think the talk of 20 years is a red herring, as we might only have 2 years (or less). Re your example of "A Conjunctive Case for the Disjunctive Case for Doom", I don't find the argument convincing because you use 20 years. Can you make the same arguments with 2 years in place of 20 (s/20/2/)?
And what I'm arguing for is not that we are doomed by default, but a high conditional probability of doom given AGI: P(doom|AGI). I'm actually reasonably optimistic that we can just stop building AGI and therefore won't be doomed! And that's what I'm working toward (yes, it's going to be a lot of work; I'd appreciate more help).
On my way of viewing things, an argument for a disjunctive framing shows that “failure on intent alignment (with success in the other areas) leads to a high P(Doom | AGI), failure on outer alignment (with success in the other areas) leads to a high P(Doom | AGI), etc …”. I think that you have not shown this for any of the disjuncts.
Isn't it obvious that none of {outer alignment, inner alignment, misuse risk, multipolar coordination} have come anywhere close to being solved? Do I really need to summarise progress to date and show why it isn't a solution, when no one is even claiming to have a viable, scalable solution to any of them!? Isn't it obvious that current models are only safe because they are weak? Will Claude-3 spontaneously just decide not to make napalm with the "Grandma's bedtime story" napalm recipe jailbreak when it's powerful enough to do so and hooked up to a chemical factory?
So far, I’ve discussed just one disjunct, but I can imagine outlining similar assumptions for the other disjuncts.
Ok, but you really need to defeat all of them given that they are disjuncts!
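To make the "disjuncts" point concrete, here's a minimal sketch (my own illustrative framing, with made-up numbers and an independence assumption purely for illustration): if failure on any single subproblem is enough for doom, then every one of them has to be solved to avoid it, and the residual risks compound.

```latex
% p_i = probability that disjunct i is NOT solved in time
% (i = outer alignment, inner alignment, misuse, multipolar coordination).
% If failure on any one disjunct suffices for doom, and (purely for
% illustration) the disjuncts are treated as independent, then:
\[
  P(\mathrm{doom} \mid \mathrm{AGI}) \;=\; 1 - \prod_{i=1}^{4} (1 - p_i),
  \qquad \text{e.g. } p_i = 0.5 \;\Rightarrow\; 1 - 0.5^{4} = 0.9375 .
\]
```

The independence assumption and the 50% figures are obviously not the real numbers; the qualitative point is just that defeating one disjunct while leaving the others unsolved doesn't move P(doom|AGI) much.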
I don’t think instrumental convergence alone gets you to ‘doom with >50%’.
Can you elaborate more on this? Is it because you expect AGIs to spontaneously be aligned enough to not doom us?
I’m unclear what, exactly, your arguments are meant to be. Also, I would personally find it much easier to engage with arguments in premise-conclusion format.
Judging by the overall response to this post, I do think it needs a rewrite.
It's looking highly likely that the current paradigm of AI architecture (foundation models) basically just scales all the way to AGI. These things are “General Cognition Engines” (watching that linked video helped it click for me). Also consider multimodality: the same architecture can handle text, images, audio, video, sensor data and robotics. Add in planners, plugins and memory (the "System 2" to the foundation model's "System 1") and you have AGI. This will be much more evident with Google Gemini (currently in training).
It seems like there is no "secret sauce" left - all that is needed is more compute and data (for which there aren't significant bottlenecks). More here.