933 karma · Joined Feb 2017


I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build, because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).


I would caution against thinking the Hard Problem of Consciousness is unsolvable "by definition" (if it is solved, qualia will likely become quantifiable). I think the reasonable position is to presume it is solvable. But until it is solved, we must not allow an AGI takeover; and even if AGIs stay under human control, they could lead to a previously unimaginable power imbalance between a few humans and the rest of us.

To me, it's important whether the AGIs are benevolent and have qualia/consciousness. If AGIs are ordinary computers but smart, I may agree; if they are conscious and benevolent, I'm okay being a pet.

> quickly you discover that [the specifics of the EA program] are a series of tendentious perspectives on old questions, frequently expressed in needlessly-abstruse vocabulary and often derived from questionable philosophical reasoning that seems to delight in obscurity and novelty

He doesn't cite or quote specifics, as if to shield his claim from analysis. "Tendentious"? "Abstruse"? He's complaining that I, as an EA, am "abstruse", meaning obscure and difficult to understand, yet I'm the one who has to look up his words in the dictionary. As for how EAs "seem": if one doesn't try to understand them, they may "seem" quite different from what they are.

> EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor.

Hmm, I've been around here a while and I recall no suggestions to hoard money for interstellar colonization. Technically I haven't been feeding the poor ― I just spent enough on AMF to statistically save one or two children from dying of malaria. But I'm also trying to figure out how AGI ruin might play out and whether there's a way to stop it, so I assume deBoer doesn't like this for some reason. The title seems to imply that because I'm interested in the second thing, I'm engaged in a "Shell Game"?

> researching EA leads you to debates about how sentient termites are

I haven't seen any debates about that. Maybe deBoer doesn't want the question raised at all? Like, when he squishes a bug, does it bother him that anyone would wonder whether pain occurred? I've seen people who engage in "moral obvious-ism" ― "whatever my moral intuitions may be, they are obviously right and yours are obviously wrong" ― and deBoer's anti-EA stance might be simply that.

> I’ve pointed to the EA argument, which I assure you sincerely exists, that we should push all carnivorous species in the wild into extinction, in order to reduce the negative utility caused by the death of prey animals. (This would seem to require a belief that prey animals dying of disease and starvation is superior to dying from predation, but ah well.) I pick this, obviously, because it’s an idea that most people find self-evidently ludicrous

The second sentence there is surely inaccurate, but the third is the crux of the matter: he claims it's "self-evidently ludicrous" to think extinction of predators is preferable to the suffering and death of prey. That's an appeal to popularity: the naturalistic intuition is very popular, therefore it must be right. But deBoer also implies that because one EA argues this, it's evidence the entire movement is mad. Isn't debate exactly what intellectuals should be doing?

> “what’s distinctive about EA is that… its whole purpose is to shine light on important problems and solutions in the world that are being neglected.” But that isn’t distinctive at all! Every do-gooder I have ever known has thought of themselves as shining a light on problems that are neglected. So what?

So perhaps he's never met anyone who did mainstream things like giving to cancer research or local volunteering. But it's a straw man anyway, since he simply ignores key elements of EA such as tractability, comparing different causes against each other via cost-effectiveness estimates, prioritization specialists, and so on.

> “Let’s be effective in our altruism,” “let’s pursue charitable ends efficiently,” “let’s do good well” - however you want to phrase it, that’s not really an intellectual or political or moral project, because no one could object to it. There is no content there

Yet he is objecting to it, and there are huge websites filled with EA content, which... counts as "no content"?

But oh well, haters gonna hate. Wait a minute, didn't Scott Alexander respond to this already?

I was one of those who criticized Kat's response pretty heavily, but I really appreciated TracingWoodgrains' analysis and it did shift my perspective. I was operating from an assumption that Ben & Hab were using an appropriate truthseeking process, because why wouldn't they? But now I have the sense that they didn't respond to counterevidence from Spencer G (and others), and the promise of counterevidence from Nonlinear, appropriately. So now I'm confused enough to agree with TW's conclusion: mistrial!

(edit: mind you, as my older comments suggest, in the end I won't conclude that Kat did nothing wrong at all. This post raises doubts about Ben's approach to the case, though, which makes it hard to tell how bad the conduct was or wasn't.)

TW, I want to thank you for putting this together, for looking at the evidence more closely than I did, for reminding me that Ben's article violated my own standards of truthseeking, and for highlighting some of Kat's evidence in more effective ways than Kat herself did. (of course, it also helps that you're an outside observer.)

I hadn't read some of those comments under Ben's article (e.g. by Spencer G) until now. I am certain that if I personally had received evidence that I was potentially quite wrong about details of an important article I'd just published (or was about to publish), I'd be very concerned. I would start investigating and making changes immediately; I'd be embarrassed, and potentially very apologetic insofar as I was wrong ― but Ben and Hab weren't, and I don't know why. So I was persuaded to switch my vote on Ben's article from up to down.

Now, Ben indicated that there were "many" other anonymous sources for the "Nonlinear is bad" vibe, but... well, he was unaware of the existence of more than half of NL's staff. So I'd like to stop putting weight on these mysterious additional sources unless I find out more about who they were and what they had to say. A mistrial! Yes, let's declare a mistrial.

This "unambiguous" contradiction seems overly pedantic to me. Surely Kat didn't expect Ben would receive her evidence and do nothing with it? So when Kat asked for time to "gather and share the evidence", she expected Ben, as a reasonable person, would change the article in response, so it wouldn't be "published as is".

Opinions on this are pretty diverse. I largely agree with the bulleted list of things-you-think, and this article paints a picture of my current thinking.

My threat model is something like: the very first AGIs will probably be near human-level and won't be too hard to limit/control. But in human society, tyrants are overrepresented among world leaders, relative to their frequency in the population of people smart enough to lead a country. We'll probably end up inventing multiple versions of AGI, some of which may be straightforwardly turned into superintelligences and others not. The worst superintelligence we help to invent may win, and if it doesn't, it'll probably be because a different one beats it (or reaches an unbeatable position first). Humans will probably be sidelined if we survive a battle between super-AGIs. So it would be much safer not to invent them ― but it's also hard to avoid inventing them! I have low confidence in my P(catastrophe) and I'm unsure how to increase my confidence.

But I prefer estimating P(catastrophe) over P(doom) because extinction is not all that concerns me. Some stories about AGI lead to extinction, others to mass death, others to dystopia (possibly followed by mass death later), others to utopia followed by catastrophe, and still others to a stable and wonderful utopia (with humanity probably sidelined eventually, which may even be a good thing). I think I could construct a story along any of these lines.

I strongly agree with the end of your post:

> Almost nobody is evil.
> Almost everything is broken.
> Almost everything is fixable.

I want you to know that I don't think you're a villain, and that your pain makes me sad. I wrote some comments that were critical of your responses ... and I still stand by those comments. I dislike and disapprove of the approach you took. But I also know that you're hurting, and that makes me sad.

So... I'd like you to dwell on that for a minute.

I wrote something in an edited paragraph deep within a subthread, and thought I should raise the point more directly. My sense is that you and Emerson have some characteristics or habits that I would call flawed or bad, and that it was justified to publicly write something about that.

But I also have a sense that Ben's post contains errors.

I think you are EAs and rationalists at heart. I respect that. And I respect the (unknown to me but probably large) funds you've put into trying to do good. Because of that, I think Ben & co should've spent more time to get Ben's initial post right.

And I guess I'm sad about this situation because I feel that both Ben's post and your post were worded in somewhat unfair ways, and I'm unconvinced that quite so much acrimony was necessary. I like to imagine a softer version of Ben's post, and a softer version of your response, in which Ben basically says "I've spoken to a bunch of people who disapprove of the way Kat & Emerson handle A, B, C, and D, and two people I'm calling Alice and Chloe were hurt by factors A, B, C and E".... and you end up saying "after a lot of soul-searching we've decided to apologize about A and B and handle those differently in the future, but we still contend that E was an inaccurate characterization, and we stand by C and D because reasons, and we accept that some people won't like that."

Do you think an alternate timeline like that was possible?

Exaggeration is fun, but not what this situation calls for. So for me, the only reason I didn't upvote you was the word "deranged". Naivety? Everybody's got some, but I think EAs tend to be below average in that respect.

I think you've both raised good points. Way upthread @Habryka said "I don't see a super principled argument for giving two weeks instead of one week", but if I were unfairly accused I'd certainly want a full two weeks! So Kat's request for a full week to gather evidence seems reasonable [ed: under the principle of due process], and I don't see what sort of opportunities would've existed for retribution from K&E in the two-week case that didn't exist in the one-week case.

However, when I read Ben's post (like TW, I did this "fresh" about two days ago; I didn't see Ben's post until Kat's post was up), it sounds like there was more evidence behind it than he specifically detailed (e.g. "I talked to many people who interacted with Emerson and Kat who had many active ethical concerns about them and strongly negative opinions"). Given this, plus concerning aspects of Kat's response, I think Ben's post is probably broadly accurate ― perhaps biased against NL relative to the evidence I've seen, but perhaps that's compensated by evidence I haven't seen, which was only alluded to.

(Edit: but it also seems like the wording of Ben's piece would've softened if they'd waited a bit longer, so... basically I lean more toward TW's position. But also, I don't expect the wording to have softened that much. This is all so damn nuanced! Also, I actually think even a partial softening of Ben's post would've been important and might have materially changed Kat's response and increased community cohesion. K&E likely have personality flaws, but are also likely EAs and rationalists at heart. I respect that, and I respect the apparently substantial funds they put into trying to do good, and so it seems like it would've been worth spending more time to get Ben's initial post right. I'm sad about this situation, I guess because I feel that both Ben and Kat's posts were worded in somewhat unfair ways, and I'm unconvinced that quite so much acrimony was necessary.)
