Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
I agree with the claims "this problem is extremely fucking hard" and "humans aren't cracking this anytime soon" and I suspect Yudkowsky does too these days.
I disagree that nanotech has to predate taking over the world; at any rate, that wasn't an assumption I was making or a conclusion I was arguing for. I agree it is less likely that ASIs will make nanotech before takeover than that they will make nanotech while still on Earth.
I like your suggestion to model a more earthly scenario but I lack the energy and interest to do so right now.
My closing statement is that I think your kind of reasoning would have been consistently wrong had it been used in the past -- e.g. in 1600 you would have declared so many things to be impossible on the grounds that you didn't see a way for the natural philosophers and engineers of your time to build them: things like automobiles, flying machines, moving pictures, thinking machines, etc. It was indeed super difficult to build those things, it turns out -- 'impossible' relative to the R&D capabilities of 1600 -- but R&D capabilities improved by many OOMs (orders of magnitude), and the impossible became possible.
Cool. Seems you and I are mostly agreed on terminology then.
Yeah we definitely disagree about that crux. You'll see. Happy to talk about it more sometime if you like.
Re: galaxy vs. earth: The difference is one of degree, not kind. In both cases we have a finite amount of resources and a finite amount of time with which to do experiments. The proper way to handle this, I think, is to smear out our uncertainty over many orders of magnitude: e.g. the first OOM gets 5% of our probability mass, the second OOM gets 5% of the remaining probability mass, and so forth. Then we look at how many OOMs of extra research and testing (compared to what humans have done) a million ASIs would be able to do in a year, compare that to how many OOMs extra (beyond that level) a galaxy's worth of ASIs would be able to do in many years, and crunch the numbers.
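Here's a minimal sketch of the kind of number-crunching I mean. The 5%-per-OOM figure is just the illustrative choice from above, and the OOM counts in the loop are placeholders, not estimates:

```python
# Toy prior: the number of extra OOMs of research and testing (beyond
# what humans have already done) needed for nanotech is geometrically
# distributed: each successive OOM gets 5% of the remaining mass.
P_PER_OOM = 0.05

def p_success(extra_ooms: float) -> float:
    """P(this many extra OOMs suffice) under the toy prior."""
    return 1 - (1 - P_PER_OOM) ** extra_ooms

for n in (1, 3, 6, 10, 14, 30):
    print(f"{n:>2} extra OOMs -> P(success) ~ {p_success(n):.0%}")
```

This prints roughly 5%, 14%, 26%, 40%, 51%, 79% -- i.e. the prior never becomes certain, it just keeps smearing mass over further OOMs, which is the point.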
What if he just said "Some sort of super-powerful nanofactory-like thing?"
He's not citing some existing literature that shows how to do it, but rather citing some existing literature which should make it plausible to a reasonable judge that a million superintelligences working for a year could figure out how to do it. (If you dispute the plausibility of this, what's your argument? We have an unfinished exchange on this point elsewhere in this comment section. Seems you agree that a galaxy full of superintelligences could do it; I feel like it's pretty plausible that if a galaxy of superintelligences could do it, a mere million also could do it.)
I think the tech companies -- and in particular the AGI companies -- are already too powerful for such an informal public backlash to slow them down significantly.
I said IMO. In context it was unnecessary for me to justify the claim, because I was asking whether or not you agreed with it.
I take it that you not only disagree, but also agree it's the crux? Or don't you? If you agree it's the crux (i.e. you agree that probably a million cooperating superintelligences with an obedient nation of humans would be able to make some pretty awesome self-replicating nanotech within a few years) then I can turn to the task of justifying the claim that such a scenario is plausible. If you don't agree, and think that even such a superintelligent nation would be unable to make such things (say, with >75% credence), then I want to talk about that instead.
(Re: people tipping off, etc.: I'm happy to say more on this but I'm going to hold off for now since I don't want to lose the main thread of the conversation.)
What part of the scenario would you dispute? A million superintelligences will probably exist by 2030, IMO; the hard part is getting to superintelligence at all, not getting to a million of them (since you'll probably have enough compute to make a million copies).
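A back-of-the-envelope sketch of the "enough compute for a million copies" step. Every constant below is a made-up round figure for illustration, not a forecast; the point is only that training compute dwarfs per-copy inference compute:

```python
# Back-of-the-envelope: all numbers are made-up placeholders.
TRAINING_FLOP = 1e27            # assumed cost of training the first ASI
INFERENCE_FLOP_PER_S = 1e14     # assumed cost of running one copy
SECONDS_PER_MONTH = 30 * 24 * 3600

# If training ran for ~3 months, the training cluster sustains:
cluster_flop_per_s = TRAINING_FLOP / (3 * SECONDS_PER_MONTH)  # ~1.3e20

copies = cluster_flop_per_s / INFERENCE_FLOP_PER_S
print(f"Copies runnable on the training cluster alone: ~{copies:,.0f}")
```

Under these assumptions the training cluster alone runs ~1.3 million copies; different placeholder numbers shift the answer by an OOM or two, but not the qualitative conclusion.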
I agree that the question is about the actual scenario, not the galaxy. The galaxy is a helpful thought experiment though; it seems to have succeeded in establishing the right foundations: How many OOMs of various inputs (compute, experiments, genius insights) will be needed? Presumably a galaxy's worth would be enough. What about a solar system? What about a planet? What about a million superintelligences and a few years? Asking these questions helps us form a credence distribution over OOMs.
And my point is that our credence distribution should be spread out over many OOMs, but since a million superintelligences would be capable of many more OOMs of nanotech research in various relevant dimensions than all humanity has been able to achieve thus far, it's plausible that this would be enough. How plausible? Idk I'm guessing 50% or so. I just pulled that number out of my ass, but as far as I can tell you are doing the same with your numbers.
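For what it's worth, you can invert the toy 5%-per-OOM prior from above to see what my 50% implicitly assumes:

```python
import math

# Under the toy prior, find N with 1 - 0.95**N = 0.5: a 50% credence
# corresponds to believing the million ASIs buy ~13-14 extra OOMs.
n = math.log(0.5) / math.log(1 - 0.05)
print(f"50% credence ~ {n:.1f} extra OOMs")  # ~13.5
```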
I didn't say they'd be building it covertly. It would probably be significantly harder if covert; they wouldn't be able to get as many OOMs. But they'd probably still get some.
I don't think using humans would mean going at a human pace; the humans would just be used as actuators. I also think making a specialized automated lab might take less than a year, or else a couple of years, not more than a few. (For a million superintelligences with an obedient human nation of servants, that is.)
I also would like to see such breakdowns, but I think you are drawing the wrong conclusions from this example.
Just because Yudkowsky's first guess about how to make nanotech, as an amateur, didn't pan out doesn't mean that nanotech is impossible for a million superintelligences working for a year; in fact it's very little evidence. When there are a million superintelligences, they will surely be able to produce many technological marvels very quickly, and for each such marvel, if you had asked Yudkowsky beforehand to speculate about how to build it, he would have failed.
(Similarly, people in the 19th century could not have correctly guessed how to build the technological marvels of the 20th century, yet those marvels still happened, and someone in the 19th century could have predicted that many of them would happen despite not being able to guess how, e.g. heavier-than-air flight.)
Thanks for this thoughtful and detailed deep dive!
I think it misses the main cruxes though. Yes, some people (Drexler and young Yudkowsky) thought that ordinary human science would get us all the way to atomically precise manufacturing in our lifetimes. For the reasons you mention, that seems probably wrong.
But the question I'm interested in is whether a million superintelligences could figure it out in a few years or less, since that's the situation we'll actually be facing. (If it takes them, say, 10 years or longer, then they'll probably have better ways of taking over the world.)
To answer that question, we need to ask questions like:
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
Seems like the answer is "Probably, though not necessarily; it might turn out that the obstacles discussed are truly insurmountable. Maybe 80% credence." If we remove the diamondoid criterion and allow it to be built of any material (but still require it to be dramatically more impressive and general-purpose / programmable than ordinary life forms), then I feel like the credence shoots up to 95%, the remaining 5% being model uncertainty.
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is 'of course.')
(3) OK, conditional on the above, the question becomes what the limiting factor is -- is it genius insights about clever binding processes or mini-robo-arm designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory performing experiments to collect data to refine our simulations? Is it compute & sim-algorithms, to run the simulations and predict which designs should in theory work? Genius insights will probably be pretty cheap to come by for a million superintelligences; I'm torn about whether the main constraint will be empirical data to fit the simulations or compute to run them.
(4) What's our credence distribution over orders of magnitude of the following inputs: genius, experiments, and compute, in each case assuming it's the bottleneck? Not sure how to think about genius, but that's OK because I don't think it'll be the bottleneck. Our distributions should range over many orders of magnitude, and should update on the observation that however many experiments and simulations humans have done so far evidently weren't close to being enough.
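To make (4) concrete, here's a toy Monte Carlo. Every distribution and input below is a placeholder I'm inventing for the sketch, not an estimate:

```python
import random

random.seed(0)
TRIALS = 100_000

# Requirement distributions, in extra OOMs beyond the human baseline,
# for each candidate bottleneck input. All parameters are made up.
def sample_requirements() -> dict:
    return {
        "genius":      random.lognormvariate(0.5, 1.0),  # cheap for ASIs
        "experiments": random.lognormvariate(2.0, 1.0),
        "compute":     random.lognormvariate(2.0, 1.0),
    }

# Made-up guesses for the extra OOMs a million ASIs get in ~a year.
available = {"genius": 8.0, "experiments": 4.0, "compute": 6.0}

hits = 0
for _ in range(TRIALS):
    req = sample_requirements()
    # Success requires every input's requirement to be met, since any
    # one of them can be the binding constraint.
    if all(req[k] <= available[k] for k in available):
        hits += 1
print(f"P(nanotech within ~a year): ~{hits / TRIALS:.0%}")
```

The answer this prints is only as good as the invented distributions; the value of the exercise is that it forces you to state them.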
I wildly guess something like 50% that we'll see some sort of super-powerful nanofactory-like thing. I'm more like 5% that it consists of diamondoid in particular; there are so many different possible materials and designs, and even if diamondoid is viable and in some sense theoretically the best, the theoretical best probably takes several OOMs more inputs to achieve than something else that's merely good enough.
Thanks for discussing with me!
(I forgot to mention an important part of my argument, oops -- you wouldn't have said "at least 100 years off," you would have said "at least 5,000 years off," because you are anchoring to recent-past rates of progress rather than looking at how rates of progress increase over time and extrapolating. (This is just an analogy / data point, not the key part of my argument, but look at GWP growth rates as a proxy for tech-progress rates: according to this, the GWP doubling time was something like 600 years back then, whereas it's more like 20 years now. So 1.5 OOMs faster.) Saying "at least a hundred years off" in 1600 would be like saying "at least 3 years off" today, which I think is quite reasonable.)
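The arithmetic behind that analogy, for the record (the 600-year and 20-year doubling times are the rough figures cited above):

```python
import math

then, now = 600, 20  # rough GWP doubling times in years (1600 vs. today)
speedup = then / now  # 30x
print(f"~{math.log10(speedup):.1f} OOMs faster")          # ~1.5
print(f"100 years then ~ {100 / speedup:.1f} years now")  # ~3.3
```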