I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
Hey, thanks for engaging. I saved the AGI theorizing for last because it's the most inherently speculative: I am highly uncertain about it, and everyone else should be too.
But the question I'm interested in is whether a million superintelligences could figure it out in a few years or less, since that's the situation we'll actually be facing. (If it takes them, say, 10 years or longer, then they'll probably have better ways of taking over the world.)
I would dispute that "a million superintelligences exist and cooperate with each other to invent MNT" is a likely scenario, but even given that, my guess would still be no. The usual disclaimer applies: the following is all my personal guesswork as a non-experimentalist and non-future-knower.
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
If we restrict to diamondoid, my credence would be very low, somewhere in the 0 to 10% range. The "diamondoid massively-parallel builds diamondoid and everything else" process is intensely challenging: only one step needs to be unworkable for the whole thing to be kaput, and we've already identified some potential problems (tips sticking together, hydrogen hitting, etc.). With all materials available, my credence is high (above 95%) that something self-replicating and more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is 'of course.')
I'd agree that this is almost certain conditional on 1.
(3) OK, conditional on the above, the question becomes what the limiting factor is -- is it genius insights about clever binding processes or mini-robo-arm-designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory performing experiments to collect data to refine our simulations? Is it compute & sim-algorithms, to run the simulations and predict what designs should in theory work? Genius insights will probably be pretty cheap to come by for a million superintelligences. I'm torn about whether the main constraint will be empirical data to fit the simulations, or compute to run the simulations.
To be clear, all forms of bonding are "exploiting quantum physics", in that they are low-energy configurations of electrons interacting with each other according to quantum rules. The answer to the sticky-fingers problem, if there is one, will almost certainly involve the bonds we already know about, such as using weaker van der Waals forces to stick and unstick atoms, as I believe is done in biology.
As for the limiting factor: in the million-year galactic-empire case, it would probably be a long search over a gargantuan set of materials and a gargantuan set of possible designs and approaches: identify the ones that are theoretically promising, whittle them down with computational simulations, and then experimentally create and test each remaining material and approach in turn. The galactic empire would be able to optimize each step and calculate which balance is fastest overall.
The balance would be different for the galactic empire than at the human scale, because it would have orders of magnitude more compute available (including quantum computing), a galaxy's worth of materials, no need to hide from people, etc. So you really have to ask about the actual scenario, not the galaxy.
In the actual scenario of a super-AI trying to covertly build nanotech, the bottleneck would likely be experimental. The problem is a dilemma: if you have to rely on employing humans in a lab, they work at human pace and hence will not get the job done in a few years. If you try to eliminate humans from the production process, you need to build a specialized automated lab first... which also requires humans, and would probably take more than a few years.
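To make the shape of that bottleneck concrete, here's a toy back-of-envelope sketch (every number in it is invented purely for illustration, not an estimate of the real quantities):

```python
# Toy sketch only: all numbers are made up to show the structure of the
# argument, not to estimate real quantities.
designs_surviving_theory = 10**7      # candidates that look promising on paper
sim_pass_rate = 10**-4                # fraction that survive computational screening
experiments_needed = designs_surviving_theory * sim_pass_rate  # 1,000 physical tests

# Compute time: simulations parallelise across a datacenter.
sim_core_hours = designs_surviving_theory * 100   # assume 100 core-hours per candidate
datacenter_cores = 10**6
sim_days = sim_core_hours / datacenter_cores / 24  # ~42 days of wall-clock time

# Experiment time: synthesis, testing, and analysis go at human-lab pace.
days_per_experiment = 7
covert_labs = 5                        # labs you can quietly employ without being noticed
exp_days = experiments_needed * days_per_experiment / covert_labs  # ~1,400 days

print(f"simulation wall-clock: {sim_days:.0f} days")
print(f"experiment wall-clock: {exp_days:.0f} days (~{exp_days / 365:.1f} years)")
```

The specific numbers don't matter (again, they're made up); the point is that the experimental term scales with how many physical tests you need and how fast a lab can run them, and unlike the simulation term it can't be parallelised away without building infrastructure that itself takes humans and years.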
Great reply! In fact, I think that the speech you wrote for the police reformer is probably the best way to advance the police corruption cause in that situation, with one change: they should be very clear that they don't think that demons exist.
I think there is an aspect where the AI-risk skeptics don't want to be too closely associated with ideas they think are wrong, because if the AI x-riskers are proven wrong, they don't want to go down with the ship. I.e., if another AI winter hits, or an AGI is built that shows no sign of killing anyone, then everyone who jumped on the x-risk train might look like fools, and they don't want to look like fools (for both personal and cause-related reasons).
I think there definitely is an aspect of "AI x-risk people suck", but I worry that casting it as a team-sports thing makes it seem overly irrational. When Timnit Gebru says that AI x-risk people suck, she's saying they are net negative: the harm they do by promoting the (in her view) incorrect x-risk idea and through the actions they take (for example, helping start OpenAI) far outweighs the incidental good they do in raising AI ethics awareness. You might think this belief is wrong, but the resulting actions make perfect sense given this belief.
To modify the Gaia example, it'd be as if the Gaia people were trying to block all renewable energy construction because it disrupted the chakras of the earth, while also loudly announcing that an earth spirit will become visible to the whole planet in 5 years. Yes, they are objectively increasing attention to your actual cause, but debunking them is still the correct move here. They've moved from being on your team to not being on your team because of object-level disagreements over which beliefs are true and which actions should be taken.
I'm not a big fan of the distraction argument, and I encourage cooperation between ethicists and x-riskers. However, I don't think you fully inhabited the mind of the x-risk skeptic here.
From their perspective, AI x-risk is absurd. They think it's all based on shoddy thinking and speculation by wacky internet people who are wrong about everything.
From your perspective, it's a matter of police corruption vs. police incompetence.
From their perspective, it's a matter of police corruption vs. police demonic possession.
Imagine you're a police reformer who wakes up one day to see article after article worrying that the police are being possessed by demons into doing bad things, and a huge movement out there worried about the demon cops. You are then interviewed about whether you are concerned by the demonic possession in the police force.
I think the distraction argument is a natural response to this kind of situation. You want to be clear that you don't believe in demons at all, and that demonic cop possession is not a real problem, but also that police corruption is a real issue. Hence: "demonic cop possession is a distraction from police corruption". I think this is a defensible statement! It's certainly true about the interview itself, in that you want to talk about issues that are real, not ones that aren't.
Time between surviving AGI and solving aging
I model this as an exponential distribution with a mean time of 5 years. I mostly think of it as requiring a certain amount of “intellectual labor” (distributed lognormally) to be solved, with the amount of intellectual labor per unit of time increasing rapidly with the advent of AGI as its price decreases dramatically.
This is an extremely wild claim, and one I believe to be almost certainly false. Efforts to even slow down aging in some parts of the body have barely gotten anywhere; do you think a mere AGI can suddenly jumpstart us to immortality? Running experiments on aging requires people to age, which inherently puts a bottleneck on this type of experiment.
I am somewhat concerned that people are ascribing near-godlike abilities to AGI without bothering to supply evidence, or even an argument, in favor of this hypothesis. All intelligences are flawed, and all intelligences have computational limits.
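For a sense of how strong that quoted model is, here's a quick reconstruction of what it implies (my own back-of-envelope reading of the quoted description, not the original author's code):

```python
import numpy as np

# My reconstruction of the quoted model (an assumption on my part):
# time from surviving AGI to solving aging ~ Exponential(mean = 5 years).
rng = np.random.default_rng(0)
samples = rng.exponential(scale=5.0, size=1_000_000)

for t in (1, 5, 15):
    print(f"P(aging solved within {t:>2} years) ≈ {np.mean(samples <= t):.2f}")
# ≈ 0.18 within 1 year, 0.63 within 5 years, 0.95 within 15 years
```

In other words, the model assigns roughly one-in-five odds to aging being solved within a single year of surviving AGI, which is exactly the kind of claim I think needs an argument behind it.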
Interesting post!
I think the "tech will be more dangerous later" argument is underappreciated, and that's partly because the ability of AI to speed up research is overestimated. From your passage here:
It’s unclear how much marginal impacts matter here: if the malicious AI has to run 100 biological experiments in the span of a month or only 50, these may be similarly easy.
This seems like a drastic underestimate of how many experiments would be required to make an x-risk-level deadly virus (as opposed to just a really, really bad one). It's easy to make a virus that is reliably deadly, or one that reliably spreads, but the two trade off against each other in a way that makes reliable world-murder ridiculously difficult. I don't think a mere 100 experiments is enough to overcome this.
I'm seeing a few comments so far with the sentiment that "lawsuits don't have the ultimate aim of reducing x-risk, so we shouldn't pursue them". I want to push back on this.
Let's say you're an environmental group trying to stop a new coal power plant from being built. You notice that the proposed site has not gone through proper planning permissions, and the locals think the plant will ruin their nice views. They are incredibly angry about this, and are mounting protests and lawsuits over the matter. Do you support them?
Under the logic above, the answer would be no. Your ultimate aim has nothing to do with planning permissions or nice views; it's stopping carbon emissions. If they moved the plant to a different location, the locals' objections would be satisfied, but yours wouldn't be.
But you'd still be insane not to support the locals here. The lawsuits and protests damage the coal project in terms of PR, money, and delays. New sites are hard to find, and it's quite possible that if the locals win, the project will end up cancelled. Most of the work is being done by people who wouldn't otherwise have helped your cause (and who might be persuaded to join it in solidarity!). And while protecting nice views may not be your number one priority, it's still a good thing to do.
I hope you see that in this analogy, the AI x-risk person is the environmental group, and the AI ethics person is the locals (or vice versa, depending on which view you hold). Sure, protecting creatives from plagiarism might not be your highest priority, but forcing compliance on that front might also have the side effect of slowing down AI development for all companies at once, which you may think helps with x-risk. And it's likely to be easier to implement than a full AI pause, thanks to the greater base of support.
I'm assuming this is an uncharitable and somewhat dickish way to accuse me of not reading your comment. I assure you I have. You are saying that it is not worth putting 500 hours into an investigation to reach the level of evidence required for a "public exposé". I am saying that it is worth it, because the community gets far more than 500 hours of benefit from this investigation. The lesser amount of investigation you advocate for would have a correspondingly smaller effect.
Also, I recommend reading up on the forum guidelines again.
It seems like you're very focused on the individual cost of the investigation, and not the community-wide benefit of preventing abuse from occurring.
The first and most obvious point is that bad actors cause harm, and we don't want harm in our community. Aside from the immediate effect, there are also knock-on effects. Bad actors are more likely to engage in unethical behavior (like the FTX fraud), are likely to misuse funds, are non-aligned with our values (do you want an AGI designed by an abuser?), etc.
Even putting morality aside, it doesn't stack up. 500 hours is roughly 3 months of full-time work. I would say the mistreated employees of Nonlinear have lost far more than that. Hell, if a team of 12 loses one week of useful productivity from a bad boss (12 people × ~40 hours ≈ 480 hours), that cancels out the 500 hours.
So, I'll give two more examples of how "burden of proof" typically gets used:
I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:
These are fine, but I'm not sure I prefer either of them. It seems like the other party can just say, "well, my priors are high, so I guess both our beliefs are equally valid".
I think "burden of proof" translates to "you should provide a lot of proof for your position in order for me or anyone else to believe you". It's a statement about what people's priors should be.
I think you've entirely missed my actual complaint here. There would have been nothing wrong with inventing a new term and using it to describe a wide class of structures. The problem is that the term already existed and has had an accepted scientific definition since the 1960s (the adamantane family of materials). If a term already has an accepted jargon definition in a scientific field, using the same term to mean something else is just sloppy and confusing.