Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
It's looking highly likely that the current paradigm of AI architecture (foundation models) basically just scales all the way to AGI. These things are "General Cognition Engines" (watching that linked video helped it click for me). Also consider multimodality: the same architecture can handle text, images, audio, video, sensor data and robotics. Add in planners, plugins and memory (they're "System 2" to the foundation model's "System 1") and you have AGI. This will be much more evident with Google Gemini (currently in training).
It seems like there is no "secret sauce" left - all that is needed is more compute and data (for which there aren't significant bottlenecks). More here.
Ok, fair point. Maybe OpenPhil then? Or Rethink Priorities? In general, I think the EA community and its leadership are asleep at the wheel here. We're in the midst of an unprecedented global emergency and the stakes couldn't be higher, yet there is very little movement apart from among a rag-tag bunch of the rank and file (AGI Moratorium HQ Slack -- please join if you want to help).
I think CEA needs to get behind the push for a global moratorium on AGI. Everything else is downstream of that (i.e. without such a moratorium there likely won't even be a world to do good in, or any sentient beings to help).
I think for that money you're going to need to prove that you're worth it - can you link to any of your work? Also, as per my note at the top of the OP, I think there basically isn't time to spin up an alignment career now, so unless you are a genius or already have some novel insights into the problem, I'm not very hopeful that your work could make a difference at this late stage. I'm more excited about people pivoting to work on getting a global AGI moratorium in place asap. Once we have that, we can focus on a "Manhattan Project" for Alignment.
Paid subscriptions started with the official release of GPT-4 (March). 100M is likely a significant underestimate now; I don't think the user base saturated there. This says 1B users (but doesn't seem that credible). Also, 1% seems kind of low when the GPT-4 answers are significantly better (though you can also get GPT-4 for free on Bing). I'd be surprised if there were <10M paid subscribers (cf. Netflix and Spotify with ~200M each).
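To make the implicit arithmetic explicit, here is a minimal back-of-envelope sketch. The user-base figures and conversion rates below are illustrative assumptions taken from the numbers discussed above, not reported data.

```python
# Back-of-envelope sanity check on paid-subscriber counts.
# All inputs are illustrative assumptions, not reported figures.

user_base_estimates = {
    "~100M users (likely an underestimate)": 100_000_000,
    "~1B users (less credible claim)": 1_000_000_000,
}

conversion_rates = [0.01, 0.03, 0.10]  # assumed share of users who pay

for label, users in user_base_estimates.items():
    for rate in conversion_rates:
        subscribers = users * rate
        print(f"{label}, {rate:.0%} conversion -> ~{subscribers / 1e6:.0f}M paid subscribers")
```

Under these assumptions, 10M+ paid subscribers falls out whenever either the user base is well above 100M at ~1% conversion, or the conversion rate is well above 1% on ~100M users - which is the point the comparison to Netflix and Spotify is gesturing at.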
My post has a long list of potential actions. "Steely determination to survive" (as per Geoffrey Miller's comment) is the vibe I'm going for.
Agree with your background claims. But I think we should be pivoting toward advocacy for slowing down / pausing / shutting down AI capabilities in general, in the post-GPT-4+AgentGPT era. Short timelines mean we should lower the bar for funding, and not worry quite so much about downside risks (especially if we only have months to get a moratorium in place).
Coming back to the point about data: whilst Epoch gathered some data showing that the stock of high-quality text data might soon be exhausted, their overall conclusion is that there is only a "20% chance that the scaling (as measured in training compute) of ML models will significantly slow down by 2040 due to a lack of training data." Regarding Jacob Buckman's point about chess, he actually outlines a way around that (training data provided by narrow AI). As a counter to the wider point about the need for active learning, see DeepMind's Adaptive Agent and the Voyager "lifelong learning" Minecraft agent, both of which seem like impressive steps in this direction.