
SeeYouAnon

499 karma · Joined Nov 2022

Comments (23)

I'm not really up for a long exchange here, as I find this sort of thing draining. So I hope you'll forgive me if I don't reply further after this message.

As a non-native speaker, I think I have literally never been dismissed in this way. So I suspect you're setting up an imaginary problem. But I only have anecdotes to go off of rather than data; if someone has survey data I'm willing to update quickly.

In text, at least, your English is notably better than the average native speaker's, so I'm not convinced you're representative here. Even setting aside your grasp of English, your obvious high intelligence would, I suspect, make it pretty hard to pull off dismissing you in this way, as would your willingness to speak your views. So I'm not convinced that the fact that you haven't experienced this means that others haven't.

That said, I accept that I have no actual data to point to.

In modern left-leaning American culture, "racist" is one of the worst things you can call someone,

I actually think this is importantly false (or, at least, importantly incomplete as a characterisation). Modern left-leaning culture really does distinguish between racism in the two senses that you quoted earlier. And when it comes to structural racism, saying that someone is racist (in the sense of having acted in a way that perpetuates and buys into racist structural norms) just isn't a terrible thing to call someone.

I've heard multiple people saying that they think everyone is racist (i.e. socialised into problematic norms that perpetuate racist discrimination) and also that they are themselves of course racist (because they too have been socialised in this way).

Structural racism is seen as a big deal by the left. It's seen as worth correcting the influence of this on ourselves. But it's not seen as a terrible accusation to acknowledge that a particular statement or behaviour was structurally racist (indeed, saying this can be helpful for allowing people to make progress in challenging the ways that structural racism has impacted their thinking).

Of course, a statement that someone is racist might be ambiguous, between the terrible reading and the structural reading, which is why I wouldn't personally use it in the latter way. So I do wish we lived in a world where people wouldn't call you racist for the things in this thread.

I'm surprised so many people would stand for me being called that based on such scant evidence

Setting aside my just-stated wish, my guess is that no-one intended to call you racist in the terrible sense. And, at the very least, my guess is that the reason people "stand for" Akhil's comment is that they do not see it as an accusation of racism in the terrible sense.

I myself did not read it this way, despite (and I would actually say, because of) very much being steeped in the contemporary left. This is partly because Akhil commented on your comment rather than on you as an individual, and partly because I think the structural, rather than the terrible, claim is the more plausible accusation here (the accusation being something like: the statement is given meaning by a set of structural norms that developed because of racist attitudes and that perpetuates racial disadvantage). So I guess I felt like the charitable read of Akhil was that he wasn't calling you racist in the terrible sense but rather was making a claim about structural racism.

For what it's worth, I think it would be a real loss to the community if you chose to be less involved.

I don't personally view your comment as racist, but it feels like you're trying to understand why someone might, so here's a take.

Here's a thing that I think is true: your comment came across as dismissive because it didn't engage with the substance of what had been said. Instead, it seemed to dismiss someone's substantive comments on the basis of their command of English. Consequently, it came across as a personal attack and specifically as dismissively disrespectful. (To be clear, I'm not saying this was your intention in making the comment; here I'm making a claim about how it came across, or at least how it will have come across to some readers.)

Now let's just focus on it as a dismissal for a moment (that is, let's just focus on the role it plays as a dismissal despite the fact that this wasn't the role you were intending it to play). This sort of dismissal might strike some as racist for two reasons:


1. Because there is a (very imperfect) connection between race and native grasp of English, this sort of strategy for dismissing a person's substantive views is likely to disproportionately impact people of certain races and is likely to reinforce existing factors that mean such people are dismissed/disrespected/not-adequately-heard. (This is perhaps particularly crucial given that English is one of a small number of languages that is disproportionately important for having power.)

2. Because of 1, this strategy is (I suspect) actually deliberately used in many cases as a form of racist dismissal. At the very least, many people will perceive that this is so. Consequently, statements like this take on a certain sort of cultural meaning and carry with them certain consequences (for example, if someone has been dismissed in this racist way many times before, it will be more hurtful to them to face this sort of dismissal again, and so the sentence comes to be particularly harmful to people who have experienced racist attacks).

Given 1 and 2, this sort of statement occupies a certain place in a set of norms around discourse: it is a member of a class of statements that reinforces racial disparities, that disproportionately harms people of certain races, and that is used as a dogwhistle to describe racist dismissal as something else. I think this does roughly fit the second definition of racism that you point to (or, at least, the more complete version of it, which recognises that systemic racism can be a matter not just of policies or systems but also of the role statements play in broader social norms).

For myself, I buy at least some of the above, and think it might mean it was worth commenting on the way that your comment could be upsetting to some. I wouldn't personally choose to describe the comment as racism, because I think this is too easily read as a comment on a person's intention and virtue, rather than as a comment about the place of the statement within a broader societal context. And as I'm confident your intentions here were good, I personally would avoid this description.

Just want to signal my agreement with this.

My personal guess is that Kat and Emerson acted in ways that were significantly bad for the wellbeing of others. My guess is also that they did so in a manner that calls for them to take responsibility: to apologise, reflect on their behaviour, and work on changing both their environment and their approach to others to ensure this doesn't happen again. I'd guess that they have committed a genuine wrongdoing.

I also think that Kat and Emerson are humans, and this must have been a deeply distressing experience for them. I think it's possible to have an element of sympathy and understanding towards them, without this undermining our capacity to also be supportive of people who may have been hurt as a result of Kat and Emerson's actions.

Showing this sort of support might require that we think about how to relate with Nonlinear in the future. It might require expressing support for those who suffered and recognising how horrible it must have been. It might require that we think less well of Kat and Emerson. But I don't think it requires that we entirely forget that Kat and Emerson are humans with human emotions and that this must be pretty difficult.

Of course, if they don't post a response, at a certain point people might decide they lack further energy to invest in this and might therefore update their views (while retaining some uncertainty) and not read further materials. This is a reasonable practical response that is protective of one's own emotional resources.

But while making this practical decision based on personal wellbeing, I think it's also possible to recognise that Kat and Emerson might not be in a place to respond as rapidly here as they might hope to (and as we might hope they would).

Also, just to say: I think these judgement calls are easy to make in the abstract, but I'm glad I don't have to make them quickly in reality when they actually have implications.

I do think the wrong call was made here, but I also think the mod team acts in good faith and is careful and reflective in their actions. I am discussing things here because I think this is how we can collectively work towards a desirable set of moderation norms. I am not mentioning these things to criticise the mod team as individuals or indeed as a group.

I appreciate the thoughtful reply. However, I don't agree with 5, which I take to be the most important claim in this reply.

Side comment: my claim isn't that moderators should avoid responding to posts that criticise prominent figures in EA. Rather, my claim is that moderators should be cautious about acting in ways that discourage critique. I think this creates a sort of default presumption that formal mod action should not be taken against critiques that include substantive discussion, as this one did.

I don't find the comparison to the "modest proposal" post particularly fruitful, because the current post just seems like a very different category of post. I think it's perfectly possible to not take action on substantive criticisms of leaders while taking action on "modest proposal" style posts.

While it might be reasonable to want to discourage the sort of rhetorical attacks seen in this post if all else were equal, I don't think all else was equal in this case. And while I agree that "criticism" of leaders shouldn't permit all sins, the post seemed to me to have enough substantive discussion that it shouldn't be grouped into the general category of "inflammatory and misleading".

I don't feel particularly good that the various concerns about this mod decision were not, as far as I can tell, addressed by mods. I accept that this decision has support from some people, but a number of people have also expressed concern. My own concern got 69 upvotes and 24 agree votes. Nathan, Linch, and Sphor all raise concerns too. I think a high bar should be set for mod action against critiques of EA leaders, but I also think that mods would ideally be willing to engage in discussion about this sort of action (even if only to provide reassurance that they generally support appropriate critique but that they feel this instance wasn't appropriate for X, Y and Z reasons).

ETA: Lizka has now written a thoughtful and reflective response here (and also explained why it took a while for any such response to be written).

I at least think it's important to distinguish:

  1. People who make minor errors because they don't speak English as a first language or otherwise find it difficult to avoid minor errors; from
  2. People who make minor errors because they write things quickly and off the cuff and can't be bothered to put the effort into making the work tighter.

Whether or not 2 is a problem, I at least don't want to blur these two things together and don't think a greater sympathy for 1 should necessarily imply a greater tolerance of 2.

("Off the cuff" can often mean low quality posts that are harder to engage with because not clearly written. These posts can decrease the time cost to the writer but increase it for the reader, especially if we focus on time cost per unit of value received. I don't think it's crazy to think there's something in this that is disrespectful to the reader.)

I have mixed feelings about this mod intervention. On the one hand, I value the way that the moderator team (including Lizka) play a positive role in making the forum a productive place, and I can see how this intervention plays a role of this sort.

On the other hand:

  1. Minor point: I think Eliezer is often condescending and disrespectful, and I think it's unlikely that anyone is going to successfully police his tone. I think there's something a bit unfortunate about an asymmetry here.
  2. More substantially: I think it's procedurally pretty bad for the moderator team to act in ways that discourage criticism of influential figures in EA (and Eliezer is definitely such a figure). I think it's particularly bad to suggest concrete specific edits to critiques of prominent figures. I think there should probably be quite a high bar set before EA institutions (like forum moderators) discourage criticism of EA leaders (especially with a post like this, which engages in quite a lot of substantive discussion rather than mere name calling). (ETA: Likewise with the choice to re-tag this as a personal blogpost, which substantially buries the criticism. Maybe this was the right call, maybe it wasn't, but it certainly seems like a call to be very careful with.)
  3. I personally agree that Eliezer's overconfidence is dangerous, given that many people do take his views quite seriously (note this is purely a comment on his overconfidence; I think Eliezer has other qualities that are praiseworthy). I think that the way EA has helped to boost Eliezer's voice has, in this particular respect, plausibly caused harm. Against that backdrop, I think it's important that there is room for robust pushback against this aspect of Eliezer.

I don't know what the right balance is here, and maybe the mod team/Lizka have already found it. But this is far from clear to me.

(P.S. While I was typing this, I accidentally refreshed, and I was happy to discover that my text had been autosaved. It's a nice reminder of how much I appreciate the work of the entire forum team, including the moderators, to make using the forum a pleasant experience. So I really do want to emphasise that this isn't a criticism of the team, or of Lizka in particular. It's an attempt to raise an issue that I think is worth reflecting on when it comes to future mod action.)

Much of this is just repeating things that others have said, but my initial position here is skepticism.

  1. The model is based on a fertility trend that has arisen in a very specific cultural, economic, and technological context. I'm very skeptical that we should take it to provide any sort of reliable guide to the long term future.
  2. It seems to me that there are plenty of ways in which projecting underlying trends forwards could interrupt the fertility trend. For example, perhaps as per capita wealth increases you get decreased child mortality, increased costs of educating children, and so on, such that having fewer children becomes incentivised. But if wealth continues to grow then perhaps economic incentives and decisions about how many children to have become decoupled (because marginal wealth becomes less important, so costs in terms of marginal wealth matter much less in terms of their impact on utility).
  3. Low fertility itself seems likely to lead to cultural changes. I feel pretty skeptical of the idea that we end up in a world with a radically shrinking population where we can carry forward the trends that are familiar from a world with a growing population.
  4. AI could easily change the connection between population growth and innovation, in a way that means progress could continue absent population growth (and this progress will plausibly itself give us the tools needed to address population issues if we become worried).
  5. AI might itself count as population in whatever sense matters.
  6. Fertility technologies might change how easy it is to have children and might lead to a decoupling between parental choice and societal birthrate (for example, you could imagine a world of artificial wombs where the government is responsible for creating the next generation, and where children are co-raised by society; clearly there might be issues with such a world, but the fertility rate itself is not the issue).
  7. I believe that evolutionary pressures tend to push genes responsible for fertility towards fixity. The genes now responsible for fertility are increasingly those related to wanting children, and we should expect these genes to evolve to fixity.
  8. One might retreat to saying that we should have only a small credence in the relevant models, but that this suffices to justify action. I'm skeptical that even this small credence is warranted: projecting the population trend forwards 300 years, through the radical change we should expect over that time, seems to me not very informative.

I recognise that the people working on this are better informed than me on this topic, and that seems like a relevant consideration. But I worry this is... kinda EA nerdbait. Clever big picture thinking, backed by quantitative models, revealing a hidden catastrophe that others have not foreseen sufficiently clearly. I'm not saying such things never get at the truth, but I do think it's reasonable to approach them with an initial attitude of skepticism, even in the face of the existence of enthusiastic proponents.

You say two things.

  1. The conclusions doesn't seem to support that
  2. I'm not sure it makes sense grammatically? "Against too much financial risk tolerance", "Against many arguments for financial risk tolerance"?

I agree that (1) is substantial. (2) is not, and the response you give in the above comment doesn't provide reasons to think (2) is substantial. It was (2) I was commenting on.

ETA: But perhaps now I'm nitpicking. I appreciate you acknowledging that you feel there was something to my other point.
ETA2: I won't reply further, because while I do stand by what I said, I also don't want to distract from what seems to me the more important discussion (about substance).
