
Inciting Incident

A friend recently sent me this message on Discord:

>so apparently one of the engineers helping elon essentially coup the government is a diehard rationalist lmao https://web.archive.org/web/20241124181237/http://colekillian.com/ lists "rationality a-z" as the book that most influenced him

>is he doing some weird game theory shit behind the scenes do you think? or has he legit drank the koolaid

This friend is like half-familiar with the rationalist community and its ideas, as I've gotten him more into it over the years. He was not, however, necessarily as up-to-date about NRx, so some of my reply will be old news to many of you.

The following is my reply, lightly edited (i.e. still conversational, without removing hyperboles or figurative language or simplifications) and heavily augmented with links:

What I Said

Y'know, I think somewhere in the process he drank Elon's kool-aid and sorta left his brain on like 2014/15-era autopilot.

He might literally think Elon's Nazi salute wasn't a Nazi salute.

On a game-theory level, it's extremely easy to basically think of yourself as a technolibertarian, and all the fascist stuff is just "necessary evils" to save America from SF-like zoning laws. (see: Anduril, Palantir)

For smart people, unfortunately, lots of the time we just rationalize stuff harder. Plenty of people swear by Rationality A-Z and have even read Meditations On Moloch, but then think they're one of the good capitalists and that the social system totally won't turn into a virus that eats them.

IMHO a bigger and more worrying influence is Curtis Yarvin, aka Mencius Moldbug, aka "the guy who wants a monarchical dictatorial CEO president". He beefed with and unfortunately apparently converted some rationalists to his cause, he was invested in by Peter Thiel, Thiel funds JD Vance, and Yarvin has written a buttload of posts over the years detailing how he wants Trump to do a coup.

Most lizardlike people on earth --> manipulate --> idealistic rationalists --> extract their intelligence --> use them up like ammo (a metaphor used by pre-Nazi-Musk employees to describe how he hired and fired) --> crystallize the intelligence into AI --> competence on-demand for the most powerful (i.e. mostly-selected to be the most-ruthless/psychotic) people on earth --> we all die by rogue AI and/or non-rogue AI drones and/or slow-burn police state and/or climate wars.

"Manipulate? But they're rationalists!" Sorry, get in line. I won thousands of dollars from a writing contest indirectly funded by SBF.

>"I'm one of the good ones."

>"I'm making tough choices."

>"It's a necessary evil."

>"Just survive until AI safety is solved, anyway let's build bigger and more-powerful AI."

>"I'll just take my piece of the pie before leaving, they can't golden-handcuff me!"

>"I don't believe in AI, so my actions make sense on a long-run future [40 paragraphs about how early Christians had more babies and now the world is perfect and Jesus-like, therefore if we just...]"

"Is there still hope?" Yes! It relies on unlikely, difficult, and/or unprecedented things happening. Then again, the world is getting more "unprecedented" every day, so maybe that's not the "drawback" it sounds like.

Bonus: a joke from later in the conversation

Yey, the awareness-of-the-Neoreactionaries (the sect that Yarvin basically owns) has now reached "leftist redditor tinfoil-hat comments", which basically means it'll be government policy by next week.

Closing Thoughts

I wish leftists existed who read and made The Sequences a part of themselves.

I wish those same people existed, and didn't then decide to become a cult of pro-Hell alleged serial killers.

I wish more leftists understood natural selection, selection effects in general, and memetics in particular.

I wish more rationalists and AI safety/alignment/governance people took Moloch seriously on the ground level, in their real everyday lives, and in how they relate to and are influenced by a systemic society ideology social system (insert 50 leftist buzzwords that are really just normal words).

I wish more rationalists would bother steelmanning classic paranoid-redditor-tier leftist ideas. Sure, the "political journey" and the "valley of bad rationality" would've gotten even messier (I can attest to this!), but the long-term gains, I think, would've been worth it. Who knows, maybe the left and the rats would've learned more about memetics and group dynamics. Ideas mixing and developing, which could have (do I dare hope?) eventually become free of blank slates and noble savages and might-makes-rights and power-ignorance and sociopathy-denial.

I wish that both the "tip top" and the "medium top" of educated people had consensus that both "property is downstream of power" and "prices are downstream of supply and demand".

I wish /r/leftrationalism had, like, any activity whatsoever.

But enough wishes.

Remember that hope I mentioned? And how it may flourish even now, with how weird things are going? Well, it would do you well to remember how the rationality community has prepared each of us for COVID and Big AI. The nontrivial percentage of us who made money from crypto and NVDA. The people with strange thoughts and the tools to check them against reality, if we desire.

Building and updating and extrapolating world-models, and from there making positive and creative change. The rest is commentary. Better late than never, eh?

>When the going gets weird, the weird turn pro. — Hunter S. Thompson
