I want to know if you can find more people or companies that have experienced something similar with the FDA.
Is there a reddit or discussion forum where people discuss and commiserate about FDA threats like this one? Can you find people there, and then verify that they / their experiences are real?
As a naive outsider, it seems to me like all of the specific actions you suggest would be stronger and more compelling if you can muster a legitimate claim that this is a pattern of behavior and not just a one-off. An article with one source making an accusation is more than 3x less credible than an article with 3 sources making the same accusation, for instance.
And if this is just a one-off, then it seems a lot less concerning, and taking action seems much less pressing. (Though it seems much easier to verify that this is a pattern, by finding other people in a similar situation to yours, than to verify that it isn't, since there are incentives to be quiet about this sort of thing).
I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion as a cloak that can hide their intent to do harm.
I'm not sure what to say to this.
Again, just because someone claims to feel harmed by some thread of discourse, that can't be sufficient grounds to establish a social rule against it. But I am most baffled by this...
And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.
Um. Yes? Of course? It's pretty rare that people are in good faith and sincerely truth-seeking. And of course there are some bad actors, in every group. And of course those people will be pretending to have good intentions. Is the claim that in order to feel safe, people need to know that there are no bad actors? (I think that is not a good paraphrase of you.)
We need a diverse set of people to at least feel safe in our community.
Yeah. So the details here matter a lot, and if we operationalize, I might change my stance here. But on the face of this, I disagree. I think that we want people to be safe in our community and that we should obviously take steps to ensure that. But it seems to be asking too much to ensure that people feel safe. People can have all kinds of standards regarding what they need to feel safe, and I don't think that we are obligated to cater to them just because they are on the list of things that some segment of people need to feel safe.
Especially if one of the things on that list is "don't openly discuss some topics that are relevant to improving the world." That is what we do. That's what we're here to do. We should sacrifice pretty much none of the core point of the group to be more inclusive.
"How much systemic racism is there, what forms does it take, and how does it impact people?" are actually important questions for understanding and improving the world. We want to know if there is anything we can do about it, and how it stacks up against other interventions. Curtailing that discussion is not a small or trivial ask.
(In contrast, if using people's preferred pronouns, or serving vegan meals at events, or not swearing, or not making loud noises, etc. helped people feel safe and/or comfortable, and they are otherwise up for our discourse standards, I feel much more willing to accommodate them. Because none of those compromise the core point of the EA community.)
...Oh. I guess one thing that seems likely to be a crux:
...if we are to succeed in truly achieving effective altruism at scale..
I am not excited about scaling EA. If I thought that trying to do EA at scale was a good idea, then I would be much more interested in having different kinds of discussions in push and pull media.
Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.
I want to distinguish between "harmful" and "upsetting". It seems to me that there is a big difference between shouting 'FIRE' in a crowded theater, or "commanding others to do direct harm", on the one hand, and "being unable to focus for hours" after reading a Facebook thread, or being exhausted from fielding questions, on the other.
My intuitive grasp of these things has it that the "harm" of the first category is larger than that of the second. But even if that isn't true, and the harm of reading racist stuff is as bad as literal physical torture, there are a number of important differences.
For one thing, the speech acts in the first category have physical, externally legible bad consequences. This matters, because it means we can have rules around those kinds of consequences that can be socially enforced without those rules being extremely exploitable. If we adopt a set of discourse rules that say "we will ban any speech act that produces significant emotional harm", then anyone not in good faith can shut down any discourse that they don't like by claiming to be emotionally harmed by it. Indeed, they don't even need to be consciously malicious (though of course there will be some explicitly manipulative bad actors); this creates a subconscious incentive to be and act more upset than you might otherwise be by some speech acts, because if you are sufficiently upset, the people saying things you don't like will stop.
Second, I note that both of the examples in the second category are much easier to avoid than those in the first. If there are Facebook threads that drain someone’s ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. Most of us have some kind of political topics that we find triggering, and a lot of us find that browsing Facebook at all saps our motivation. So we have workarounds to avoid that stuff. These workarounds aren't perfect, and occasionally you'll encounter material that triggers you. But it seems way better to have that responsibility be on the individual. Hence the idea of safe spaces in the first place.
Furthermore, there are lots of things that are upsetting (for instance, that there are people dying of preventable malaria in the third world right now, and that this, in principle, could be stopped if enough people in the first world knew and cared about it, or that the extinction of humanity is plausibly imminent), which are nevertheless pretty important to talk about.
First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I'm inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn't allow that kind of talk. Namely, that "the Holocaust happened, and Holocaust denial is false".
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to rule out any discussion to the contrary.
If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.
This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...
In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.
I think so. I expect that any rigid rule is going to have edge cases that are bad enough that you should treat them differently. But I don't think we're on the same page about what the relevant scalar is.
If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?
It depends entirely on what is meant by "certain forms", but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as "racist", because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren't actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don't know enough about the world to rule out discussion of that line of thinking entirely.
I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, "look, we're here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are 'harmed' by speech-acts, I'm sorry for you, but tough nuggets. I guess you shouldn't participate in this discourse. "
That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed to mean that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I'm tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
I don't follow how what you're saying is a response to what I was saying.
I think a model by which people gradually "warm up" to "more advanced" discourse norms is false.
I wasn't saying "the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms." I was saying that if I was mistaken about that "warming up effect", it would cause me to reconsider my view here.
In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.
I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have a very high epistemic standards. If people wouldn't like the discourse norms in the central EA spaces, I don't want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have "advanced" and "introductory" EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: "I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of 'EA's discourse norms are as good as those in academia.'"]
Another thing for people to keep in mind:
Apparently, if you want loan forgiveness, you can only spend 8 weeks' worth of the money on payroll.
If you’re a sole proprietor, you can have eight weeks' worth of the loan forgiven as a replacement for lost profit. But you’ll need to provide documentation for the remaining two weeks' worth of cash flow, proving you spent it on mortgage interest, rent, lease, and utility payments.
So if at some point you need to check boxes saying what you're applying for this loan for, and you can check more than one box, you should check all of them, or at least payroll + something else. If you can only check one box, I guess check payroll.
I doubt that that box checking actually matters, but it seems prudent to do this, just in case it does.
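The split described above can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical illustration only: the eight-week payroll portion, the two-week documented-expenses portion, and the resulting ten-week total are taken from the paragraphs above, not from the statute or SBA guidance, and real forgiveness rules have more conditions than this.

```python
# Hypothetical sketch of the forgiveness split described above.
# Assumes (per the text, not the statute) the loan covers ten weeks
# of cash flow: eight weeks' worth may be spent on payroll, and the
# remaining two weeks' worth must be documented as mortgage interest,
# rent/lease, or utility payments.

def forgivable_split(loan_amount: float, weeks_covered: int = 10):
    """Return (payroll_portion, documented_expenses_portion)."""
    weekly = loan_amount / weeks_covered
    payroll_portion = 8 * weekly                    # spendable on payroll
    other_portion = loan_amount - payroll_portion   # needs documentation
    return payroll_portion, other_portion

payroll, other = forgivable_split(10_000)
print(payroll, other)  # 8000.0 2000.0
```

So on a hypothetical $10,000 loan, $8,000 could go to payroll and $2,000 would need receipts for the allowed expense categories.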
I imagine that most of the disagreement is with the (implied, but not stated) conditional "that Owen did this means that decent men don't exist".