Mantas Mazeika

Comments (2)

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer, not whether quantum computing will play into the development of AGI. This question matters for AI safety, and few people are talking about it or are qualified to tackle it.

Quantum algorithms seem highly relevant to this question. At the risk of revealing my total lack of expertise in quantum computing, one might even wonder what learnable quantum circuits, i.e. quantum analogues of neural networks, would entail. I don't know; it just seems wide open.
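For concreteness, here is a rough sketch of what "learnable quantum circuit" usually means in the variational-quantum-circuit literature: a small parameterized circuit whose parameters are trained by gradient descent. This is my own illustration, simulated classically with NumPy; none of the names come from an actual quantum-computing framework.

```python
# A minimal sketch of a "learnable quantum circuit": a 2-qubit variational
# circuit simulated classically with NumPy and trained by gradient descent.
# Everything here is illustrative; no quantum-computing framework is assumed.
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control and qubit 1 as target (qubit 0 = left kron factor)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit(params):
    """Apply RY(theta_i) to each qubit of |00>, then an entangling CNOT."""
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    return CNOT @ state

def expval_z0(params):
    """Expectation of Pauli-Z on qubit 0 -- treated as the circuit's output/loss."""
    Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))
    psi = circuit(params)
    return psi @ Z0 @ psi

# "Learning": drive <Z0> toward -1 using the parameter-shift rule, which gives
# exact gradients for rotation gates like RY.
params = np.array([0.1, 0.2])
for _ in range(200):
    grad = np.array([
        0.5 * (expval_z0(params + (np.pi / 2) * e) - expval_z0(params - (np.pi / 2) * e))
        for e in np.eye(2)
    ])
    params -= 0.1 * grad

print(expval_z0(params))  # approaches -1.0
```

The open question is what happens when circuits like this are scaled up and run on real quantum hardware rather than a classical simulator, and whether that buys anything qualitatively new for learning.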

Some questions:

  • Forecasting is highly information-limited. A superintelligence that can't see half the chessboard can still lose. Does quantum computing provide a differential advancement here?
  • Does AlphaFold et al. render irrelevant the hope that quantum computing will supercharge simulation of chemical/physical systems? Or would a 'quantum AlphaFold' trounce the original? (Again, I am no expert here.)
  • Where will exponential speedups play a role in practical problems? Simulation? Only of quantum systems, or of complex systems more generally? Wherever the answer is "yes", the implications for AI safety are worth thinking through (a rough sketch of why simulation is the standard candidate follows this list).
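On that last point, a back-of-the-envelope illustration of why simulating quantum systems is the textbook candidate for exponential speedups: the state a classical machine must store grows as 2^n in the number of qubits. The snippet below is just that arithmetic, my own illustration rather than a claim about any particular algorithm or hardware.

```python
# Back-of-the-envelope arithmetic: the memory a classical computer needs to
# store the full state of n qubits grows as 2**n, which is why exact classical
# simulation of quantum systems becomes infeasible around ~50 qubits. This is
# only an illustration of the scaling, not a statement about any algorithm.
for n_qubits in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n_qubits            # complex amplitudes to track
    memory_gb = amplitudes * 16 / 1e9     # 16 bytes per complex128 amplitude
    print(f"{n_qubits:2d} qubits: {amplitudes:.1e} amplitudes, ~{memory_gb:.1e} GB")
```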

Given your background, you can probably contribute a lot to AI safety efforts by continuing in quantum computing.

Photonics and analog neural net hardware will probably have enormous impacts on capabilities (qualitatively similar to the initial impacts of GPUs in 2012-2019). Quantum computing is basically another fundamental hardware advance that may be a bit further out.

The community needs people thinking about the impacts of quantum computing on advanced AI. What sorts of capabilities will quantum computing grant AI? How will this play into x-risk? I haven't heard any good answers to these questions.