
The point is this distorts the apparent balance of reason - maybe this is like Marxism, or NGDP targeting, or Georgism, or general semantics, perhaps many of which we will recognise were off on the wrong track.

I do note one is not like the others here. Marxism is probably way more wrong than any of the other beliefs, and I feel like the inclusion of the others rather weakens the case here.

Honestly, a lot of the problems with politics stem from its totalizing nature, comparable to strong longtermism, and from the fact that emotion hampers political discussions more often than it helps, compared to longtermism.

I'd say that if EA can't handle politics in the general forum, then I think a subforum for EA politics should be made. Discussions about the politics of EA or how to effectively do politics can go there.

Meanwhile, the general EA forum can simply ban political posts and discussions.

Yes, it's a strong measure to ban politics here. But bluntly, social communities that want to maintain any level of civility and charity do ultimately tend towards banning politics and discussion around it, except maybe in a subforum.

Finally, I do think that there is a risk of updating too quickly the other way. Back on the original post, some users have responded to this comment saying that it's 'entirely correct'[1], but I don't think it's reasonable to view Expo's piece as a 'major misrepresentation' of what happened - their reporting on the case seems to have been accurate. While 'Option 1' seems to be what happened, there is still a question of how the grant made it to stage 5 out of 7 of FLI's grant-making procedure. It's not the major scandal we feared, but its ramifications are more than 'mildly embarrassing'.

Similarly, I think EA's response to the first apology was reasonable, because FLI and Tegmark gave a terrible apology, essentially amounting to a non-apology. It was at least reasonable to suspect that they didn't actually have much of a problem with Nazism.

Now, a better apology has been produced, but it almost certainly required the criticism of FLI to stick.

My big concern is that permanent harm could be suffered by either EA or its championed causes. Somewhat like how transhumanism became tarred with the brush of racism and eugenics, I worry that things like AI safety or X-risk work could come to be viewed in the same light. And there may be much more at stake than people realize.

The problem is that even without a hinge of history, our impacts, especially in a longtermist framework, are far, far larger than those of previous generations, and we could very well lose all that value if EA or its causes become viewed as badly as, say, eugenics or racism.


you want people to believe a certain thing (even if it's something you yourself sincerely believe), in this case that EA is not racist

it's about managing impressions and reputations (e.g. EA's reputation as not racist)

Your initial comment (and also the Bostrom email statement) both struck me as "performative" in how they demonstrated really harsh and absolute condemnation ("absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible" – granted that you said "if true", but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking, as the first thing you say on a topic[1], feels pretty out of place to me within EA, and certainly isn't what I want in the EA I would design.

Extreme condemnation pattern-matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and feels pretty toxic. It feels like it's coming from a place of needing to demonstrate "I/we are not the bad thing".

So even if your motivation was "do your bit to make it clear that EA isn't racist", that does strike me as still political/PR (even if you sincerely believe it)

(And I don't mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to. Upsetness is real, and caring about reputation can come from a really good place.)

I could write more on my feelings about PR/political stuff, because my view is not that it's outright "bad/evil" or anything, more that caution is required.

IMO, this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.

Psychologists know IQ is a somewhat mysterious measure (no, scoring lower on an IQ test does not necessarily mean a person is "more stupid"). It is affected by things like income shifts across generations and social position. For Bostrom to even have held that opinion as an educated 23-year-old was bad, but to not unequivocally condemn it today - despite the harm it can clearly cause - seems even worse.

I disagree, because I think the evidence from psychology is that IQ is a real measure of intelligence, and while a lot of old tests had high cultural biases, the modern ones are way better.

That said, I still strong-upvoted your comment, because PR and looking good matter, and you are correct on the genetic-science point that there is evidence against real-life subspecies/races.

IMO, ecoterrorism's deaths came primarily from the Unabomber, who caused at least 3 deaths and 23 injuries. I may retract my first comment if I can't find more evidence than this.

[This comment is no longer endorsed by its author]

Alas, I do think this defense no longer works, given FTX, which seems substantially worse than all the ecoterrorism I have heard about.

I disagree with this, because I believe FTX's harm was far less bad than most ecoterrorism, primarily because of the disutility involved. FTX hasn't actually injured or killed people, unlike a lot of ecoterrorism. It stole billions, which isn't good, but no violence was involved. I don't think FTX was good, but so far no violence has been attributed to EAs, or even much advocated by them.

[This comment is no longer endorsed by its author]

Only a small red flag, IMO, because it's rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.

I don't think there's been a huge scandal involving Will? Sure, there are questions we'd like to see him openly address about what he could have done differently re FTX - and I personally am concerned about his aforementioned influence, because I don't want anyone to have that much - but very few if any people here seem to believe he's done anything in seriously bad faith.

I was imagining a counterfactual world where Will MacAskill did something hugely wrong.

And yeah come to think of it, selection may be quite a bit stronger than I think.
