I am not the only quantum computing PhD student I know with an interest in AI existential safety, and there’s a reason for that: I don’t think quantum computing is going to be that useful that soon. That’s not to say that I’m pessimistic about it! But my timelines for AI and useful quantum computers are pretty similar, and that does not favour the latter.
Regardless, there is already a lot of funding for quantum computing, and I see no good reason for EA to get involved. Let me make a few illustrative comments.
You write: “For this reason, my project sought out to show that there are indeed pressing world issues and areas of interest to EAs where Quantum Computation could have an edge in mitigating existential risk.”
A concerningly motivated start.
I’ve no interest in arguing about D-Wave and its flight-assignment demonstration at Frankfurt airport, though I will note that its devices do not attempt gate-based quantum computation; they perform quantum annealing.
Relatedly, companies like to find things to do with their quantum computers, and those demonstrations are usually motivated by publicity rather than practical advantage.
Then you refer to an ability of quantum computers to solve the travelling salesman problem faster in order to aid drug distribution. The problem is NP-hard, so the best known quantum speedup is a trivial Grover-type quadratic one over brute-force search. And…really? Drug distribution? This is a subset of shipping route optimisation. The companies doing that are well aware of quantum computing and don’t need EA’s help.
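To see why a quadratic speedup barely dents the problem, here is a back-of-envelope sketch (my own illustration, not anything from the project under review): Grover search over N candidates takes roughly √N quantum queries, but for the travelling salesman problem the brute-force search space is (n−1)!/2 tours, so even its square root grows super-exponentially in the number of cities.

```python
import math

# Back-of-envelope illustration: a Grover-type quadratic speedup turns a
# brute-force search over N candidates into roughly sqrt(N) quantum queries.
# For TSP on n cities, the naive search space is (n-1)!/2 distinct tours
# (fix the starting city, ignore direction), so sqrt of it is still
# super-exponential in n.

def tour_count(n: int) -> int:
    """Number of distinct tours for n cities (fixed start, direction ignored)."""
    return math.factorial(n - 1) // 2

def grover_queries(n: int) -> int:
    """Rough Grover query count: about the square root of the search space."""
    return math.isqrt(tour_count(n))

for n in (10, 20, 30):
    print(f"n={n}: tours={tour_count(n):.3e}  ~Grover queries={grover_queries(n):.3e}")
```

At 30 cities the “sped-up” query count is already above 10^15, which is the point: a quadratic speedup over an exponential search is not a practical advantage.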
You finish by throwing around a number of materials-design claims. But like everything else you’ve mentioned here, none of these things is neglected, and it’s not clear why EA should be involved.