## Effective Altruism Forum

Jsevillamol

PhD student at Aberdeen University studying Bayesian reasoning

Interested in practical exercises and theoretical considerations related to causal inference, forecasting and prioritization.

Some thoughts on EA outreach to high schoolers

Without going into too many sensitive details: when I have looked at the output of similar programs, I was excited about the career path of about 1 out of every 3 participants.

But a) I don't know how much of that was counterfactual, b) when I made the estimate I had an incentive to produce an optimistic answer, and c) it relies on my subjective judgement, which you may not trust.

It is also worth noting that I think the raw conversion rate is not the right metric to focus on; the outliers usually account for most of the impact of these programs.

Quantum computing timelines

It is not intended to be a calibrated estimate, though we were hoping it could help others make calibrated estimates.

The ways that a calibrated estimate would differ include:

1. The result is a confidence interval, not a credence interval (in most places where the paper says probability it should say confidence; I apologize for the oversight), so your choice of prior can make a big difference to the associated credence interval.

2. The model assumes that no discontinuous progress will happen, but we do not know whether this will hold. Grace (2020) estimates a yearly rate of discontinuous breakthroughs on any given technology of 0.1%, so I'd naively expect a 1 - (1 - 0.1%)^20 ≈ 2% chance of such a discontinuous breakthrough in quantum computing in the next 20 years.

3. The model makes optimistic assumptions about progress, namely that a) the exponential rate of progress will hold for both the physical qubit count and the gate error rate, b) there is no correlation between the two metrics in a system (which we show is probably an optimistic assumption, since it is easier to optimize only one of the metrics than both), and c) qubit connectivity can be ignored, which we do due to lack of data and modelling difficulty.

If I were pressed to put a credence bound on it, I'd assign about 95% chance that EITHER the model is basically correct OR the timelines are slower than it predicts (most likely if the exponential trend of progress on gate error rate does not hold over the next 20 years). That gives an upper bound on the probability that we will have RSA-2048 quantum attacks by 2040 of 5% + 95% · 5% ≈ 10%.
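The two back-of-the-envelope calculations above can be reproduced in a few lines (a sketch using the probabilities quoted in the comment; the variable names are mine):

```python
# Chance of a discontinuous breakthrough in quantum computing over 20 years,
# assuming an independent 0.1% yearly rate (the Grace 2020 figure).
p_yearly = 0.001
p_discontinuity = 1 - (1 - p_yearly) ** 20
print(f"discontinuity chance: {p_discontinuity:.1%}")  # ~2.0%

# Upper bound on RSA-2048 quantum attacks by 2040: with 95% credence the
# model is right (giving at most a 5% chance of attack), and in the
# remaining 5% worst case the attack happens for sure.
p_model_ok = 0.95
upper_bound = (1 - p_model_ok) * 1.0 + p_model_ok * 0.05
print(f"upper bound: {upper_bound:.1%}")  # ~9.8%, i.e. roughly 10%
```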

In either case, I think the model should make us puzzle over the expert timelines, and inquire whether the experts are taking into account extra information or being too optimistic.

EDIT: I made an arithmetic mistake, now corrected (thanks to Eric Martin for pointing it out).

What are some low-information priors that you find practically useful for thinking about the world?

In a context where multiple forecasts have already been made (by you or other people), use the geometric mean of the odds as a blind aggregate:

If you want to get fancy, use an extremized version of this pooling method, scaling the pooled log odds by an extremizing factor:

Satopaa et al. have found that an extremizing factor of roughly 2.5 gives the best results in practice.
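A minimal sketch of this pooling rule (the function name and example forecasts are mine, not from the comment):

```python
import math

def pool_odds(probs, extremize=1.0):
    """Aggregate forecasts via the geometric mean of odds.

    Averaging log odds is equivalent to taking the geometric mean of the
    odds. Setting extremize > 1 scales the pooled log odds, pushing the
    aggregate away from 50% (the extremized variant mentioned above).
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-pooled))

forecasts = [0.1, 0.3, 0.4]
print(pool_odds(forecasts))                  # plain geometric mean of odds
print(pool_odds(forecasts, extremize=2.5))   # extremized variant
```

Note that if all forecasters agree, the plain pool returns their common probability unchanged, while extremizing pushes it further from 50%.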

Assessing the impact of quantum cryptanalysis

I think we broadly agree.

I believe that chemistry and materials science are two applications where quantum computing might be a useful tool, since simulating very simple physical systems is something a quantum computer excels at but is arguably significantly slower to do on a classical computer.

On the other hand, the people more versed in materials science and chemistry whom I talked to seemed to believe that (1) classical approximations will be good enough to approach problems in these areas and (2) in silico design is not a huge bottleneck anyway.

So I am open to a quantum computing revolution in chemistry and materials science, but moderately skeptical.

Summarizing my current beliefs about how important quantum computing will be for future applications:

• Cryptanalysis => Very important for solving a handful of problems relevant to modern security, with no plausible alternative.
• Chemistry and materials science => Plausibly useful, not revolutionary.
• AI and optimization => Unlikely to be useful; huge constraints to overcome.
• Biology and medicine => Not useful; systems too complex to model.

Assessing the impact of quantum cryptanalysis

Thank you so much for your kind words and juicy feedback!

Google has already deployed post-quantum schemes as a test

I did not know about this, and it actually updates me on how much overhead post-quantum crypto will need (the NIST expert I interviewed gave me the impression that the overhead was large and would essentially require specialized hardware to meet performance expectations, but this seems to suggest the contrary (?)).

There may be significant economic costs due to public key schemes deployed "at rest"

To make sure I understand your point, let me try to paraphrase. You are pointing out that:

1) past communications that are recorded will be rendered insecure by quantum computing

2) there are some transition costs associated with post-quantum crypto, related for example to the cost of rebuilding PGP certificate networks.

If so, I agree that this is a relevant consideration, but it does not change the bottom line.

Thank you again for reading my paper!

Assessing the impact of quantum cryptanalysis

As in, Google's quantum computer Sycamore is capable of solving a (toy) problem that we currently believe infeasible on a classical computer.

Of course, there is a more interesting question of when we will be able to solve practical problems using quantum computing. Experts believe that the median date for a practical attack on modern crypto is ~2035.

Regardless, I believe that outside (and arguably within) quantum cryptanalysis the applications will be fairly limited.

The paper in my post goes into more detail about this.

Update on civilizational collapse research

I'm currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.

Can you explain the bit about long-lived radionuclides?

How would they be produced? How would they affect "technological carrying capacity"?

[WIP] Summary Review of ITN Critiques

Thank you for writing this up - always good to see criticism of key ideas.

I want to contest point 4.

The fact that we can decompose "Good done / extra person or $" into three factors that can be roughly interpreted as Scale, Tractability and Neglectedness is not a problem, but a desirable property. Ultimately, we want to evaluate marginal cost-effectiveness, i.e. "Good done / extra person or $". However, this is difficult to estimate directly, so we want to split it into simpler terms.

The mathematical identity that decomposes cost-effectiveness serves as a guarantee that, by estimating all three factors, we will not be leaving anything important behind.
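Written out explicitly (a sketch following the standard 80,000 Hours-style factorization; the comment itself does not spell out the factors), the decomposition is:

```latex
\frac{\text{good done}}{\text{extra \$}}
  \;=\;
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{Neglectedness}}
```

The intermediate units cancel telescopically, which is exactly the guarantee that the product of the three estimates recovers marginal cost-effectiveness without leaving anything out.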

Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED)

I do agree with your assessment, and I would be medium excited about somebody informally researching which algorithms can be quantized, to see if there is low-hanging fruit in terms of simplifying assumptions that could be made in a world where advanced AI is quantum-powered.

However, my current intuition is that there is not much sense in digging into this unless we were somewhat confident that 1) we will have access to QC before TAI and 2) QC will be a core component of AI.

To give a bit more context on the article: Pablo and I originally wrote it because we disagreed on whether current research in AI Alignment would still be useful if quantum computing were a core component of advanced AI systems.

Had we concluded that quantum obfuscation threatened to invalidate some assumptions made by current research, we would have been more emphatic about the necessity of having quantum computing experts working on "safeguarding our research" on AI Alignment.