Despite the 'proof' being public, the scaling hypothesis remained a Thielian secret until recently. It took ChatGPT to convince Facebook/Meta AI research that scaled-up prosaic AI was a plausible path to beyond-human intelligence; they recently held internal meetings sounding the alarm. DeepMind also believed that a significant number of fundamental advances would be needed. Sadly, this is not the case. The scaling curves don't bend anywhere near fast enough. I highly recommend reading about how many 'bees' worth of synapses/neurons various animals and LLMs have. For example: "Davinci is a 175-bee model, which gets us up to hedgehog (or quail) scale. Gopher (280 bees) is partridge (or ferret) sized. More research into actual gophers is needed to know how many gophers worth of parameters Gopher has."
Here is Anthropic's version of events:
In 2019, several members of what was to become the founding Anthropic team made this idea precise by developing scaling laws for AI, demonstrating that you could make AIs smarter in a predictable way, just by making them larger and training them on more data. Justified in part by these results, this team led the effort to train GPT-3, arguably the first modern “large” language model, with over 173B parameters.
Since the discovery of scaling laws, many of us at Anthropic have believed that very rapid AI progress was quite likely. However, back in 2019, it seemed possible that multimodality, logical reasoning, speed of learning, transfer learning across tasks, and long-term memory might be “walls” that would slow or halt the progress of AI. In the years since, several of these “walls”, such as multimodality and logical reasoning, have fallen. Given this, most of us have become increasingly convinced that rapid AI progress will continue rather than stall or plateau. AI systems are now approaching human level performance on a large variety of tasks, and yet training these systems still costs far less than “big science” projects like the Hubble Space Telescope or the Large Hadron Collider – meaning that there’s a lot more room for further growth.
Imo, less formalized versions of the scaling hypothesis/laws were known well before 2019. The recent, more precise understanding of scaling laws should not make us much less bullish on AI progress. Though perhaps there is hope we 'run out of data' before AI becomes too dangerous, I would not bank on this.
As long as the scaling hypothesis remained a secret*, doing anything that risked convincing more people of it was extremely risky, and imo very unethical. From the inside view it might have seemed that either the 'secret' was obvious or OpenAI was going to convince everyone of it anyway, but until recently there was hope OpenAI would eventually stop being so reckless. At this point, though, the secret is pretty much out. An AI arms race has been triggered, and existing actors will shortly convince many of the remaining important doubters. This will soon include the government of [insert country you don't like].
EAs systematically doubt the dangers posed by aligned AI. There is not much reason to assume that just because an AI system is aligned to the goals of some humans, it will be good for humanity as a whole. Of course, many in AI safety treat 'at least not everyone dies' as the win condition, but I would hope for a better future. If AI is extremely hard to align, the current game-board is just not that likely to be winnable; time is running out fast. It is probably still bad to work on capabilities at top labs like Anthropic or OpenAI: if you happen to make a big advance, like the switch to ReLU, you will burn a huge amount of timeline. But working on cool AI projects now seems positive to me.
Previously, any cool project unacceptably risked convincing even more people of the scaling hypothesis. But if the secret is out, it seems worthwhile to try to steer AI in a positive direction. This has been a very big update for me. Until very recently I promoted a hardline stance of 'absolutely do not work on AI'. But now we might as well play to our out of 'AI isn't that hard to align' and work on steering toward a brighter, based future.
Thank you for sharing this, I particularly enjoyed the bee comparisons, which I hadn't seen before.
I didn't quite follow the logic behind "working on cool AI projects now seems positive to me".
It's perhaps because I don't know quite what you mean by "working on cool AI projects".
Are you saying that capabilities research on a "cool AI project" is safer than capabilities research at OpenAI or Anthropic? If so I'm not clear on why?
Or does a cool AI project mean applying AI rather than developing new capabilities?
i don't think that's how dignity points work.
for me, p(alignment hard) is still big enough that, when weighing the options, it's still better to keep working on hard alignment (see my plan). that's where the dignity points are.
"shut up and multiply", one might say.
I'm not trying to get dignity points; I'm just trying to have a positive impact. At this point, if AI is hard to align, we all die (or worse!). I spent years trying to avoid contributing to the problem and helping where I could. But at this point it's better to write off the lost-cause timelines, hope alignment isn't that hard, and try to steer the trajectory positively.
"dignity points" means "having a positive impact".
if alignment is hard, we need my plan. and it's still very likely that alignment is hard.
and "alignment is hard" is a logical fact, not an indexical location; we don't get to save "those timelines".