402 · Joined Jul 2020


Lumpy is an undergraduate at some state college somewhere in the States. He isn't an interesting person and interesting things seldom happen to him.

Among his skills are such diverse elements as linguistic tomfoolery, procrastination, being terrible with computers yet running Linux anyway, a genial temperament and magnanimous spirit, a fairly swell necktie if he does say so himself, mounting dread, and quiet desperation.

Plays as a wizard in any table top or video game where that's an option, regardless of whether it's a [i]strong[/i] option. Has never failed a Hogwarts sorting test, of any sort or on any platform. (If you were about to say how one can't fail a sorting test . . . one surmises that you didn't make Ravenclaw.) Read The Fellowship, Two Towers, and Return of the King over the course of three sleepless days at age seven; couldn't keep down solid food after, because he'd forgotten to eat. Was really into the MBTI as a tweenager; thought it ridiculous how people said that no personality type was "better" than the others when ENTJ is clearly the most powerful. (Scored INFP, his self, but hey, one out of four isn't so bad. (However, found a better fit in INTP.)) Out of the Disney princesses Lumpy is Mulan--that is, if one is willing to trust BuzzFeed. Which, alas, one is not.

No, but seriously.

Mulan?? 0_o

If, despite this exhaustive list of traits and deeds, your burning question is left unanswered, send a missive in private. Should your quest be noble and intentions pure, it is said that Lumpyproletariat might respond in kind.


1. For each AGI, there will be tasks that have difficulty beyond its capabilities.

2. You can make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more and more constraints to a goal function. 


(Apologies for terseness here; I do appreciate the effort that went into writing this up.)

1. It seems to me you underestimate the capabilities of early AGI. Speed alone is sufficient for superintelligence; FOOM isn't necessary for AI to be overwhelmingly more mentally capable.

2. One can't actually make the task "subjugate humanity under these constraints" arbitrarily more difficult or undesirable by adding more constraints to the goal function. Constraints aren't uncorrelated with each other--you can't make invading medieval France arbitrarily hard by adding more pikemen, archers, cavalry, walls, trenches, sailboats. Innovative methods to bypass pikemen from outside your paradigm also sidestep archers, cavalry, walls, etc. If you impose all the constraints available to you, they are correlated because you/your culture/your species came up with them. Saying that you can pile on more safeguards to drive the probability of failure toward zero is like saying that if a wall made of red bricks is only 50% likely to be breached, building a second wall out of blue bricks will drop the probability of a breach to 25%.
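The correlation point can be made numeric with a toy simulation. All the numbers below (per-wall stopping power, the size of the shared blind spot) are illustrative assumptions of mine, not figures from the argument above:

```python
import random

random.seed(0)

def breach_probability(n_walls, correlation, trials=100_000):
    """Estimate how often an attacker gets past all n_walls.

    Each wall independently stops the attacker half the time, but with
    probability `correlation` a single out-of-paradigm trick (one the
    defenders never considered) bypasses every wall at once. Both
    numbers are illustrative assumptions.
    """
    breaches = 0
    for _ in range(trials):
        if random.random() < correlation:
            breaches += 1  # one shared blind spot defeats everything
        elif all(random.random() < 0.5 for _ in range(n_walls)):
            breaches += 1  # attacker got past each wall separately
    return breaches / trials

# With independent walls, stacking them drives breach odds toward zero;
# with correlated walls, breach odds floor out at the shared blind spot.
for walls in (1, 2, 10):
    print(walls, breach_probability(walls, correlation=0.2))
```

With `correlation=0.0` the naive multiplication story holds (ten walls give roughly a 0.1% breach rate); with `correlation=0.2`, no number of additional walls pushes the breach rate below about 20%.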

Oh, these people are certainly not bots. Chatbots aren't very, uh, good at disguising themselves. They're more likely to say, unprompted, "bot? I'm not a bot. are you a bot?" in response to your saying "bot flies are nasty insects", or to link you to an h-game, than they are to ask whether you're a Luddite, ask for college advice, or tell you how to contact them on Discord, where they send the conversation up to that point as a text file. Humans sound like humans; bots sound like bots. (Also, these people have sleep schedules and all the other thousand tells that make one confident that someone is made of flesh and blood.)

Why, then, are Omeglers more amenable to convincing than meat people? I'm not sure. Part of it might be that, on average, the gap between how smart they are and how smart the average EA is is larger than the gap between the average EA and the average person EAs find themselves trying to convince. I'm not sure that having good ideas was super important in how convincing I came across.

Another part is that they're somewhat preselected for hearing weird ideas out; these are, after all, people who chose to spend their time listening to utter strangers utter their politics. 

Another part could be that they're starved for good conversation. Presuming that the average EA isn't far behind the average LessWronger or Slate Star Codex reader, average IQ is in the global top 2%. It doesn't seem outlandish that some Omeglers found me the most intelligent person they'd had an extended conversation with.

And, finally--I probably spoke with a couple hundred people on Omegle, filtering out people who weren't interesting to talk to very quickly. Median conversation length was measured in seconds; those that lasted longer went only a few minutes; highly enjoyable conversations lasted hours and ended in shared contact info maybe 25% of the time. Extricating oneself literally took only the click of a button. Four people who wanted to stay in contact does not seem like an outlandish hit rate.

None of this theorizing is particularly grounded; I have not and do not intend to spend much in the way of braincycles here. 

The 100-130 IQ range contains most of the United States' senators.

You don't need a license to be more ambitious than the people around you, and you don't need an IQ of 131 or greater to find the most important thing and do your best. I'm confident in your ability to have a tremendous outsized impact on the world, if you choose to attempt it.

If you're unconvinced about AI danger and you tell me specifically what your cruxes are, I might be able to connect you with Yudkowskian short stories that address your concerns.

The ones which come immediately to mind are:

That Alien Message

Sorting Pebbles Into Correct Heaps

I can't speak for anyone but myself, but I really don't like the idea of creating humans because other people want them for something. Hearing arguments framed that way fills me with visceral horror and makes it relatively harder for me to pay attention to anything else. 

If you want to catch up quickly to the front of the conversation on AI safety, you might find this YouTube channel helpful:

If you prefer text to video, I'm less able to give you an information-dense resource--I haven't kept track of which introductory sources and compilations have been written in the past six years. Maybe other people in the comments could help.

If you want to learn the mindset and background knowledge that go into thinking productively about AI (and EA in general, since this is--for many of the old hands--where it all started), this is the classic introduction:

Strong upvote because I think this should be at the top of the conversation and this is what I came here to say. 

Tofu has strong negative associations for many Americans; if you want to sell something that neither tastes like American tofu nor has the texture of American tofu, I would advise you in the strongest possible language to call it anything but tofu.

Criticism has become so distorted from what it should be that my intention would not even be to criticize. Yet there is no way to suggest that any organization could be doing anything better without someone interpreting it as an attempt at sabotage. It's not that I'm afraid of how others will respond. It's that so many individual actors have come to fear each other and the community itself. Weathering the barrage of hostility that comes from trying to contribute to anything is too much of a hassle to be worthwhile.

I notice that the OP has gotten twenty upvotes--including one from me--but that I myself have never encountered the phenomenon described. My experience, like D0TheMath's, is that people who offer criticism are taken seriously. Other people in this comment section, at least so far, seem to have had similar experiences.

Could some of the people who've experienced such chilling effects give more details about them? By PM, if they don't anticipate as strongly as I do that the responses on the open forum will be civil and gracious?

Oh, I'm sorry for being unclear! The second phrasing emphasizes different words ("as" and "adult human") in a way I thought made the meaning of the original post clearer.
