Sharmake

Comments

Introducing Asterisk

Is there a link to your website? I didn't see one in this post.

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

The surrender was really about the Emperor having a way out: the bomb was a discontinuous jump in destructive power, which let him give the "most cruel bomb" statement. Even so, a group of young officers tried to continue the war, and the attempt failed only because the Emperor had chosen surrender, and to Japan the Emperor was roughly as important as the God-Emperor of Mankind is to the Imperium of Man in 40k. Up to that point Japan, despite steadily worsening conditions, still couldn't surrender, and I think that was because everything got worse continuously: there was no single moment strong enough to act as a rupture and force surrender into their heads.

I'll grant you this, though: the scenario isn't inevitable. Without hindsight, and without nukes, it's obviously really hard to deal with, but it may not happen at all.

10 non-EA books you might find interesting

I must say, I wince at one book here, and I'll explain why.

On The Emperor's New Mind: I see a wealth of wrongness in this book. The misuse of Gödel's incompleteness theorems is astounding. Invoking "human understanding" as an escape just relocates the problem: the human reasoner's own reasoning may harbor hidden inconsistencies or be incomplete as well. Escaping the theorems would require humans to survey uncountably infinitely many proofs in finite time, which is a very exceptional claim.
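To pin down what's being misused, here is a standard informal statement of the first incompleteness theorem (my paraphrase, not the book's wording):

```latex
% Gödel's first incompleteness theorem, informal paraphrase:
\text{If } T \text{ is consistent, effectively axiomatized, and interprets arithmetic,}\\
\text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T.
```

Nothing in the statement singles out machines: a human reasoner following any fixed, checkable set of rules is just another effectively axiomatized system, subject to the same limit.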

The issue with the Chinese Room is that a lookup table that understands Chinese is only physically impossible, due to storage limits, information limits, and thermodynamic issues with heat dissipation; it is not logically impossible. If you're willing to accept that thermodynamics is utterly broken, you can arbitrarily add more energy to cram more cases into the lookup table until it covers Chinese, or force efficiency arbitrarily past 100% until every rule fits with a minimum of computing. The Chinese Room is a philosophical toy, nothing more.
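A quick back-of-envelope sketch of the storage problem; the vocabulary size and conversation length are illustrative assumptions of mine, not anything from Searle:

```python
# Back-of-envelope: size of a lookup table mapping every possible
# Chinese conversation prefix to a reply. Parameters are illustrative
# assumptions, not figures from Searle's argument.

VOCAB = 3000    # assume ~3,000 common Chinese characters
LENGTH = 50     # assume conversations of just 50 characters

table_entries = VOCAB ** LENGTH   # one entry per possible input string

print(f"entries needed: ~10^{len(str(table_entries)) - 1}")
print("atoms in observable universe: ~10^80")
# entries needed: ~10^173 -- the table cannot physically exist,
# but nothing about it is *logically* contradictory.
```

Even at fifty characters the table dwarfs the observable universe, which is exactly why the impossibility is physical rather than logical.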

We know what microtubules do, and it isn't quantum computation. The quantum-brain hypothesis actually has severe problems, and is basically an attempt to claim there's a soul in a physical sense.

Re determinism and quantum mechanics: there is a variant called superdeterminism which says there is no free will and all outcomes are pre-staged. It is just as computational as the free-will-friendly interpretations of quantum mechanics, if not more so.

This book is the perfect example of how expertise in one area doesn't equal expertise in all areas.

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

The big difference is that, in that scenario, Japan doesn't even exist as a nation or culture, due to Operation Downfall, starvation, and insanity. Without nukes, the invasion of Japan begins, and two of Imperial Japan's most important characteristics were an entire generation raised under propaganda, which is enough to change cultural values, and a near fanaticism about honorable death. Death in battle was frankly over-glorified in Imperial Japan, and soldiers would virtually never surrender. The result, within several years, is the non-existence of Japan.

A hypothesis for why some people mistake EA for a cult

The real issue is that AI tends to be both the public-facing side of EA and the area with a lot of existential claims that sound similar to cultish claims, like "If AGI happens, we'll go extinct." We really need specific cause areas for new EAs, to make EA less of a personal identity.

Sharmake's Shortform

My guess is that nuclear technology, unlike other existential risks, is neither dual-use nor easily hidden: weapons require enrichment levels so high that they are easy to distinguish from peaceful uses, and it's probably not going to be so easy that every state can make a nuke. That said, I agree with the other parts of your comment.

Sharmake's Shortform

The War in Ukraine that started on February 24th has some important consequences for EA. Specifically, the likelihood of nuclear war within 2 years is still very low, despite Russian threats of nukes. On the other hand, the long-term existential risk from nuclear warfare has increased. This is because Russia invaded Ukraine while using its nuclear warheads as a shield. This deals a significant blow to arms control, creates a more unstable international order, and will incentivize more states to take up nukes. What this means could be expanded on, though.

Virtue signaling is sometimes the best or the only metric we have

I agree somewhat, but I think this represents a real difference between rationalist communities like LessWrong and the EA community: rationalists focus on truth, while Effective Altruism focuses on goodness. Quite different goals when we get down to it.

While Effective Altruism uses a lot more facts than most moral communities, it is a community focused on morality, and its lens is essentially "weak utilitarianism": it doesn't accept the strongest conclusions of utilitarianism, but there are no absolute dos or don'ts, unlike for deontologists.

The best example: suppose "P = NP" were proven true. It probably isn't, but it illustrates the difference between rationalists and EAs. Rationalists would publish the proof to the world, focusing on the truth. EAs would not, because one of the problems we'd then be able to solve efficiently is breaking encryption. That would essentially deal a death blow to any sort of security on computers: a hacker's paradise. EAs would focus on how bad an information hazard it would be, and for the good of the world they wouldn't publish it.
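To make the "death blow to encryption" concrete, here is a toy sketch of the asymmetry involved; the XOR "cipher" and two-byte key are hypothetical stand-ins of mine, not a real cryptosystem:

```python
# Toy illustration of why P = NP would break encryption: checking a
# candidate key is fast (the NP "verifier"), while finding one is a
# search over exponentially many candidates. P = NP would collapse
# that search to roughly the cost of a single check.
from itertools import product

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; it is its own inverse, so it also decrypts.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def check_key(ciphertext: bytes, key: bytes, known_prefix: bytes) -> bool:
    # Fast: one decryption and a comparison.
    return encrypt(ciphertext, key)[:len(known_prefix)] == known_prefix

secret_key = b"k9"
ciphertext = encrypt(b"attack at dawn", secret_key)

# Slow: exhaustive search over all 2-byte keys.
for candidate in product(range(256), repeat=2):
    key = bytes(candidate)
    if check_key(ciphertext, key, b"attack"):
        print("recovered key:", key)
        break
```

In a P = NP world, the exhaustive loop at the end would be replaced by a polynomial-time procedure, so the verifier's speed would transfer directly to the attacker.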

So what were all those words for? To illustrate the difference in point of view between rationalists like LessWrong and EAs on prioritizing truth versus goodness.