RavenclawPrefect

Comments

EA capital allocation is an inner ring

Opening with a strong claim, making your readers scroll through a lot of introductory text, and ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a very good look on this forum.

Insightful criticism of the capital allocation dynamics in EA is a valuable and worthwhile thing that I expect most EA Forum readers would like to see! But this is not that, and the extent to which it appears to be that for several minutes of the reader's attention comes across as rather rude. My gut reaction to this kind of rhetorical strategy is "if even the author doesn't want to put forth the effort to make this into a coherent argument, why should I?"

[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]

Things I recommend you buy and use.

Alexey Guzey has posted a very critical review of Why We Sleep. I haven't deeply investigated the resulting debate, but my impression from what I've seen so far is that the book should be read with a healthy dose of skepticism.

If one doesn't have strong time discounting in favor of the present, the vast majority of the value that can theoretically be realized lies in the far future.

As a toy model, suppose the world is habitable for a billion years, but faces an extinction risk over the next 100 years that requires substantial effort to avert.

If resources are dedicated entirely to mitigating extinction risks, there is net -1 utility each year for 100 years but a 90% chance that the world can be at +5 utility every year afterwards once these resources are freed up for direct work. (In the extinction case, there is no more utility to be had by anyone.)

If resources are split between extinction risk and improving current subjective experience, there is net +2 utility each year for 100 years, and only a 50% chance that the world survives to the positive long-term future state above.

It's not hard to see that the former case has massively higher total utility, and remains so under almost any numbers in the model, so long as we can expect billions of years of potential future good.
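
For concreteness, here is a minimal sketch of that arithmetic in Python. The billion-year horizon and the per-year utilities are just the toy numbers above; nothing hinges on their exact values.

```python
# Toy model from the comment above: compare the expected total utility of the
# two resource-allocation strategies. HORIZON is the assumed habitable
# lifetime of the world in years; the other numbers are the illustrative ones
# from the text.

HORIZON = 1_000_000_000   # total habitable years
RISK_PERIOD = 100         # years during which extinction risk must be averted

def expected_total_utility(utility_during_risk, survival_prob, utility_after=5):
    """Utility accrued during the risk period, plus a survival_prob chance of
    utility_after per year for every remaining year of the horizon."""
    return (utility_during_risk * RISK_PERIOD
            + survival_prob * utility_after * (HORIZON - RISK_PERIOD))

full_mitigation = expected_total_utility(-1, 0.9)   # all resources on x-risk
split_allocation = expected_total_utility(+2, 0.5)  # split with present welfare

print(f"full mitigation:  {full_mitigation:.3e}")   # ~4.5e9
print(f"split allocation: {split_allocation:.3e}")  # ~2.5e9
```

The +200 utility gained during the risk period in the split case is swamped by the roughly 2e9 of expected far-future utility lost to the lower survival probability.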

A model like this relies crucially on the idea that at some point we can stop diverting resources to global catastrophic risk, or at least do so less intensively, but I think this is an accurate assumption. We currently live in an unusually risk-prone world; it seems very plausible that pandemic risk, nuclear warfare, catastrophic climate change, unfriendly AGI, etc. can all be safely dealt with within a few centuries if modern civilization endures long enough to keep working on them.

One's priorities can change over time as their marginal value shifts; ignoring other considerations for the moment doesn't preclude focusing on them once we've passed various x-risk hurdles.


Quantum computing concerns?

It seems to me that there are quite low odds of 4000-qubit computers being deployed without proper preparations. There are very strong incentives for cryptography-using organizations of almost any stripe to transition to post-quantum encryption algorithms as soon as they expect such algorithms to become necessary in the near future, for instance as soon as they catch wind of 200-, 500-, and 1000-qubit quantum computers. Given that post-quantum algorithms already exist, it should not take long to go from worrying about better quantum computers to protecting against them.
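
As a rough illustration of why successively larger machines would give RSA users advance warning: if one assumes Shor's algorithm is run on something like Beauregard's circuit, which factors an n-bit modulus with roughly 2n+3 logical qubits, the modulus sizes breakable at each scale look like the sketch below. This deliberately ignores error correction, which multiplies the physical-qubit requirement enormously, so it overstates rather than understates the threat at a given machine size.

```python
# Back-of-the-envelope only: largest RSA modulus (in bits) that a machine with
# a given number of LOGICAL qubits could factor via Shor's algorithm, assuming
# the ~(2n + 3)-qubit circuit of Beauregard (2003). Fault-tolerant overhead is
# ignored, so real physical-qubit requirements are vastly larger.

def breakable_modulus_bits(logical_qubits: int) -> int:
    """Largest n such that 2n + 3 <= logical_qubits."""
    return max((logical_qubits - 3) // 2, 0)

for q in (200, 500, 1000, 4099):
    print(f"{q:>5} logical qubits -> ~{breakable_modulus_bits(q)}-bit RSA modulus")
# 200 -> 98, 500 -> 248, 1000 -> 498, 4099 -> 2048
```

On this crude accounting, machines capable of breaking 2048-bit RSA sit at the end of a visible ladder of smaller machines that can only break key sizes nobody uses anymore, which is exactly the warning signal described above.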

In particular, it seems like the only plausible route by which many current or recent communications are decrypted using large quantum computers is one in which a large amount of quantum computation is suddenly directed towards these goals without prior warning. This seems to require both (1) an incredible series of theoretical and engineering accomplishments produced entirely in secret, perhaps on the scale of the Manhattan Project, and (2) that this work be done by an organization which is either malicious in its own right or distributes the machines publicly to other such actors.

(1) is not inconceivable (the Manhattan Project did happen*), but (2) seems less likely; in particular, the most malicious organizations I can think of with the resources to pull off (1) are something like the NSA, and I think there is a pretty hard upper bound on how bad their actions can be (for example, "global financial collapse from bank fraud" doesn't seem like a possibility). Also, the NSA has already broken various cryptographic schemes in secret, and the results seem to have been far from catastrophic.

I don't see a route by which generic actors could acquire RSA-breaking quantum tech without the users of RSA being able to see it coming months, if not years, in advance.

*Though note that there were no corporations working to develop nuclear bombs, while there are various tech giants working on quantum computers, so the competition is greater than it was for the bomb.