I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but as a call for a norm that, on the forum, the time and place to write sentences you don't actually think are true as stated is "never (except maybe April Fools)".
The change I'd like to see in this post isn't a five-paragraph footnote on morality, just the replacement of a sentence I don't think they actually believe with one they do. I think that environments where it's considered a faux pas to point out "actually, I don't think you can have a justified belief in the thing you said" are extremely corrosive to the epistemics of the community hosting them, and it's worth pushing back on them pretty strongly.
"it doesn't seem good for people to face hardship as a result of this"
I agree, but the tradeoff is not between "someone with a grant faces hardship" and "no one faces hardship", it's between "someone with a grant faces hardship" and "someone with deposits at FTX faces hardship".
I expect that the person with the grant is likely to put that money to much better uses for the world, and that's a valid reason not to return it! But in terms of the direct harm experienced by the person deprived of the money, I'd guess the median person who lost $10,000 to unrecoverable FTX deposits is made a fair bit worse off by that than the median person with a $10,000 Future Fund grant would be by returning it.
I assume you mean something like “return the money to FTX such that it gets used to pay out customer balances”, but I don’t actually know how I’d go about doing this as an individual. It seems like if this were something lots of people wanted to do, we’d need some infrastructure to make it happen, and doing it in a way that gave the funds the right legal status to be transferred back to customers might be nontrivial.
(Or not; I’m definitely not an expert here. Happy to hear from someone with more knowledge!)
What level of feedback detail do applicants currently receive? I would expect that giving a few more bits beyond a simple yes/no would have a good ROI, e.g. at least having the grantmaker pick one of a handful of rejection reasons from a dropdown menu.
"No because we think your approach has a substantial chance of doing harm", "no because your application was confusing and we didn't have the time to figure out what it was saying", and "no because we think another funder is better able to evaluate this proposal, so if they didn't fund it we'll defer to their judgment" seem like useful distinctions to applicants without requiring much time from grantmakers.
Opening with a strong claim, making your readers scroll through a lot of introductory text, and ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a very good look on this forum.
Insightful criticism of the capital allocation dynamics in EA is a valuable and worthwhile thing that I expect most EA Forum readers would like to see! But this is not that, and the extent to which it appears to be that for several minutes of the reader's attention comes across as rather rude. My gut reaction to this kind of rhetorical strategy is "if even the author doesn't want to put forth the effort to make this into a coherent argument, why should I?"
[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]
Alexey Guzey has posted a very critical review of Why We Sleep - I haven't deeply investigated the resulting debate, but my impression from what I've seen thus far is that the book should be read with a healthy dose of skepticism.
It seems to me that the odds of 4000-qubit computers being deployed without proper preparations are quite low. Cryptography-using organizations of almost any stripe have very strong incentives to transition to post-quantum encryption algorithms as soon as they expect such algorithms to become necessary in the near future, for instance as soon as they catch wind of 200-, 500-, and 1000-qubit quantum computers. Given that post-quantum algorithms already exist, it does not take much time to go from worrying about better quantum computers to protecting against them.
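For a rough sense of where figures like "4000 qubits" come from (my own back-of-envelope, not something from the original discussion): standard circuit constructions for Shor's algorithm, such as Beauregard's, need roughly 2n + 3 logical qubits to factor an n-bit RSA modulus, so RSA-2048 lands at around 4000 logical qubits, ignoring the much larger physical-qubit overhead from error correction.

```python
# Back-of-envelope only: logical-qubit count for a Beauregard-style Shor circuit,
# which needs about 2n + 3 logical qubits to factor an n-bit modulus.
# Physical-qubit counts would be far higher once error correction is included.
def shor_logical_qubits(modulus_bits: int) -> int:
    return 2 * modulus_bits + 3

for bits in (1024, 2048, 4096):
    print(f"RSA-{bits}: ~{shor_logical_qubits(bits)} logical qubits")
```

So a "4000-qubit computer" in this sense is roughly the point at which RSA-2048 becomes attackable in principle, which is why the incentive to switch kicks in well before machines of that size actually exist.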
In particular, it seems like the only plausible route by which many current or recent communications get decrypted by large quantum computers is one in which a large amount of quantum computation is suddenly directed towards that goal without prior warning. This seems to require both (1) an incredible series of theoretical and engineering accomplishments produced entirely in secret, perhaps on the scale of the Manhattan project, and (2) that this work be done by an organization which is either malicious in its own right or distributes the machines publicly to other such actors.
(1) is not inconceivable (the Manhattan project did happen*), but (2) seems less likely; in particular, the most malicious organizations I can think of with the resources to pull off (1) are something like the NSA, and I think there is a pretty hard upper bound on how bad their actions can be (in particular, "global financial collapse from bank fraud" doesn't seem like a possibility). Also, the NSA has already broken various cryptographic schemes in secret and the results seem to have been far from catastrophic.
I don't see a route by which generic actors could acquire RSA-breaking quantum tech without the users of RSA being able to see it coming months, if not years, in advance.
*Though note that there were no corporations working to develop nuclear bombs, while there are various tech giants looking at ways of developing quantum computers, so the competition is greater.
Since no one has said anything in reply to this comment yet: I suspect it is getting downvotes because it doesn't seem especially relevant to the current discussion and feels like it would fit better as a standalone post or an Intercom message or something.