PabloAMC 🔸

Quantum algorithm scientist @ Xanadu.ai
1153 karma · Joined · Working (6-15 years) · Madrid, España

Bio

Participation
5

Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments
148

I agree there is certainly quite a lot of hype, though when people want to hype quantum they usually target AI or something similar. My comment was echoing that quantum computing for materials science (and also chemistry) might be the one application where good-quality science is being done. There are also significantly less useful papers, for example those related to "NISQ" (non-error-corrected) devices, but I would argue the QC community is doing a good job of focusing on the important problems rather than just generating hype.

Hi there, I am a quantum algorithm researcher at one of the large startups in the field, and I have a couple of comments: one to back up the conclusion on ML for DFT, and another to push back a bit on the quantum computing end.

On ML for DFT: a year and a half ago we tried (code here) to replicate and extend the DM21 work, and despite some hard work we failed to get good accuracy when training ML functionals. Now, this could be because I was dumb or lacked sufficient data or compute, but we mostly concluded that it was unclear how to make ML-based functionals work.

On the other hand, I feel the following paragraph is a bit less evidence-based.

Quantum computing was basically laughed off as overhyped and useless. M said no effect on the field “in my lifetime” (he’s like, 50). N said it was “far away”, and G said it was “hype”, and that it is “absolutely not true” that QC material science will happen soon. They were very disdainful, and this matches what I’ve heard through the grapevine of the wider physics community: there is a large expectation that quantum computing is in a bubble that will soon burst.

I think there are genuine reasons to believe QC can become a pretty useful tool once we figure out how to build large-scale fault-tolerant quantum computers. In contrast to logistics, finance, optimization, etc., which are poor target areas for quantum computing, materials science is where (fault-tolerant) quantum computing could shine brightest. The key reason is that we could numerically integrate the Schrödinger equation for large system sizes with polynomial scaling in the system size and polylogarithmic cost in the (guaranteed) precision, without the vast majority of the approximations needed in classical methods; a schematic version of this scaling claim is spelled out after the list below. I would argue that the following papers represent roughly the state of the art in the quantum algorithms we may be able to run:

  1. Quantum simulation of realistic materials in first quantization using non-local pseudopotentials.
  2. Faster quantum chemistry simulations on a quantum computer with improved tensor factorization and active volume compilation.
  3. Quantum simulation of exact electron dynamics can be more efficient than classical mean-field methods.
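
To spell out the scaling claim above (schematically; this is not tied to any one of the papers listed, and constant factors matter a great deal in practice), the hoped-for cost of fault-tolerant simulation is

    \mathrm{cost}_{\text{quantum}} = O\!\left(\mathrm{poly}(N)\cdot \mathrm{polylog}(1/\varepsilon)\right)

in the system size N and the guaranteed precision ε, whereas exact classical methods (e.g. full configuration interaction) scale exponentially in N, and the polynomially scaling classical methods achieve their efficiency only by giving up guaranteed accuracy.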

The takeaway of these papers is that with a few thousand logical qubits and logical gates running at MHz rates (something that people in the field believe to be reasonable), it may be possible to simulate relatively large correlated systems with high accuracy in times on the order of days. Now, there are of course very important limitations. First and foremost, you need some rough approximation to the ground state that we can prepare (here, here) and then project with quantum computing methods. This limits the size of the system we can model, because it introduces a dependence on classical methods, but it efficiently extends the range of accurate simulations.
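As a rough sanity check of the "order of days" figure, here is a minimal back-of-envelope sketch. The total gate count and the logical clock rate are illustrative assumptions chosen for the arithmetic, not numbers taken from the papers above.

    # Back-of-envelope runtime: how "logical gates at MHz" turns into "days".
    # All numbers are illustrative assumptions, not published resource estimates.
    logical_qubits = 4_000      # assumed: "a few thousand" logical qubits
    gate_count = 1e11           # assumed number of sequential logical gates
    logical_clock_hz = 1e6      # assumed ~MHz logical gate rate

    runtime_seconds = gate_count / logical_clock_hz
    runtime_days = runtime_seconds / (24 * 3600)
    print(f"{logical_qubits} logical qubits, {gate_count:.0e} gates "
          f"-> ~{runtime_days:.1f} days of wall-clock time")   # ~1.2 days

With these assumptions the runtime lands at roughly a day; an order of magnitude more gates still keeps it in the range of days to weeks, consistent with the estimate above.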

Second, as noted in the post, classical methods are pretty good at modeling ground states. Thus, it makes sense to focus most of the quantum computing effort on modeling strongly correlated systems, excited states, or dynamic processes involving light-matter interaction and the like. I would argue we still have not found good ways to go beyond the Born-Oppenheimer approximation, though, except if you are willing to model everything (nuclei and electrons) in plane waves and first quantization, which is feasible but may make the simulation perhaps one or two orders of magnitude more costly.

This is all assuming fault-tolerant quantum computing. I can't say much about timelines, because I am an algorithms researcher and do not have a very good understanding of the hardware challenges, but I would not find it surprising to see companies building fault-tolerant quantum computers with hundreds of logical qubits 5 to 15 years from now. For example, people have been making good progress, and Google recently showed the first experiment where errors can be reliably reduced with quantum error correction. The next step for them is to build a logical qubit that can be corrected over arbitrarily long time scales.

Overall, I think the field of fault-tolerant quantum computing is putting forward solid science, and it would be overly dismissive to call it just hype or a bubble.

I think this had more to do with GDPR than with the AI Act, so the late release in the EU might be a one-off. Once you figure out how to comply with the data-collection requirements, it should be straightforward to extend that compliance to new models, if they want to.

My point is that slowing AI down is often an unwanted side effect from the regulator's perspective. Thus, the main goal is raising the bar for safety practices across developers.

I don’t think the goal of regulation or evaluations is to slow down AGI development. Rather, the goal of regulation is to standardise minimal safety measures across labs (some AI control, some security, etc.) and create some incentives for safer AI. Evaluations can certainly be used to lobby for a pause, but I think their main purpose is to feed into regulation or control measures.

My donation strategy:

It seems that we have some great donation opportunities in at least some causes, such as AI Safety. This has made me wonder what donation strategies I prefer. Here are some thoughts, also influenced by Zvi Mowshowitz's:

  1. Attracting non-EA funding to EA causes: I prefer donating to opportunities that may bring external or non-EA funding to causes that EA deems relevant.
  2. Expanding EA funding and widening career paths: Similarly, where possible, fund opportunities that could increase the funds or skills available to the community in the future. For this reason, I am highly supportive of Ambitious Impact's project to create on-ramps for impactful earning-to-give careers, for instance. This is in contrast to incubating new charities (Charity Entrepreneurship), which is slightly harder to justify unless you have strong reasons to believe your impact will be more cost-effective than that of typical charities. I am a bit wary that the uncertainty might be too large to clearly distinguish between charities at the frontier.
  3. Fill the gap left by others: Aim to fund charities that are medium-sized, in roughly their 2nd to 5th years of life: they are no longer small and young enough to rely on Charity Entrepreneurship seed funding, but they are also not yet large enough to get funding from large funders. One could similarly argue that you should fund causes that non-EAs are less likely to fund (e.g. animal welfare), though I find this argument stronger if non-EA funding came close to fully funding the other causes (e.g. global health), or if the former cause (animal welfare) depended entirely on support from the EA community.
  4. Value stability for the people running charities: By default, and unless there are clearly better opportunities, keep donating to the same charities as before, and do so with unrestricted funds. This gives charities some stability, which they very much welcome. Also, do not push too hard on the marginal cost-effectiveness of donations, because that creates poor incentives.
  5. Favour hits-based strategies and local knowledge: Favour hits-based strategies, particularly those where you benefit from local knowledge of opportunities that may not be visible to others in the community.

One example of a charity I will support is ARMoR, which fits well with points 1 and 3. I am also excited about local-knowledge opportunities in the AI Safety ecosystem. Otherwise, I am particularly optimistic about the work of Apollo Research on evaluations and Redwood Research on AI control, as I believe those to be particular enablers of more robust AI governance.

I agree with most of this, except perhaps the framing of the following paragraph.

Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?

In my opinion, the key difference is that here the bad outcome (e.g. animal suffering, but any other, really) may happen because of decisions taken by the people you are saving. So, in a sense, it is not an externally imposed mechanism. The key insight to me is that the children always have the chance to prevent the suffering that follows: people can reason and become convinced, as I was, that this suffering is important and should be prevented. Consequently, I feel strongly against letting innocent people die in these situations. So overall I do not think this has to do with speciesism or cause prioritisation.

Incidentally, this echoes common cultural themes in films and books: that people can change their minds, and that they should be given the chance to. Similarly, it is a common theme that you should not kill innocent people to prevent some bad thing from happening (think of Thanos and overpopulation, or Caiaphas arguing that Jesus should die to prevent greater wrongdoing…). Clearly these are not strong ethical arguments, but I think they contain a grain of truth; and one should probably have a very strong, taboo-level bias against endorsing (not merely discussing) conclusions that justify letting innocent people die.

For what it's worth, I like the work of the Good Food Institute on pushing the science and the market for alternative proteins. They also do some policy work, though I fear their lobbying might have orders of magnitude less strength than the industry's.

Also, as far as I know, the Shrimp Welfare Project is directly buying and giving away the stunners (hopefully to create some standard practice around them). So counterfactually it seems a reasonable bet, at least for the direct impact.

But I resonate with the broad concerns about corporate outreach and advocacy. I am particularly wary of bad-cop strategies. While I feel they may work, I can easily see companies setting up public advertising campaigns about how their work is good for farmers and the community. I see them doing this all the time, and they are far better financed than charities.

Hey Vasco, in a constructive spirit, let me explain how I believe I can be a utilitarian (maybe hedonistic to some degree), value animals highly, and still not justify letting innocent children die, which I take as a sign of the limitations of consequentialism. Basically, you can stop consequence flows (or discount them very significantly) whenever they pass through other people's choices: people are free to make their own decisions. I am not sure whether there is a name for this moral theory, but it is roughly what I subscribe to.

I do not think this is an ideal solution to the moral problem, but I think it is much better than advocating for letting innocent children die because of what they may end up doing.
