Jason

15225 karma · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts (2)

Comments (1718)

Topic contributions (2)

For me, "zakat being compatible with EA" means "it's possible to increase the impact of zakat and allocate it in the most cost-effective way" [ . . . .]

Indeed, effective giving being subject to donor-imposed constraints is the norm, arguably even in EA. Many donors are open only to certain cause areas, or to certain levels of risk tolerance, or to projects with decent optics, etc. Zakat compliance does not seem fundamentally different from donor-imposed constraints that we're used to working within.

Although I have mixed feelings on the proposal, I'm voting insightful because I appreciate that you are looking toward an actual solution that at least most "sides" might be willing to live with. That seems more insightful than the Forum's standard response, which is to rehash fairly well-worn talking points every time an issue like this comes up.

My recollection is that the recent major scandals/controversies were kickstarted by outsiders as well: FTX, Bostrom, the Time article and other news coverage, etc. I don't think any of those needed help from the Forum for the relevant associations to form. The impetus for the Nonlinear situation was of internal origin, but (1) I don't think many on the outside cared about it, and (2) the motivation to post seemed to be protecting community members from perceived harm, not reputational injury. 

In any event, this option potentially works only for someone's initial decision to post at all. Once something is posted, simply ignoring it looks like tacit consent to what Manifest did. Theoretically, everyone could simply respond with: "This isn't an EA event, and scientific racism is not an EA cause area" and move on. The odds of that happening are . . . ~0. Once people (including any of the organizers) start defending the decision to invite on the Forum, or people start defending scientific racism itself, it is way too late to put the genie back in the bottle. Criticism is the only viable way to mitigate reputational damage at that point.

But insofar as people think that Manifest's actions were ok-ish, it's mostly sad that they are associated with EA and make EA look bad, [ . . . .]

To clarify my own position, one can think Manifest's actions were very much not okay and yet be responding with criticism only because of the negative effects on EA. Also, I would assert that the bad effects here are not limited to "mak[ing] EA look bad."

There's a lot of bad stuff that goes on in the world, and each of us has only a tiny amount of attention and bandwidth relative to its scope. If there's no relationship to one of my communities, I don't have a principled reason for caring more about what happens at Manifest than about what happens in the (random example) Oregon Pokemon Go community. I wouldn't approve if they invited some of these speakers to their Pokemon Go event, but I also wouldn't devote the energy to criticizing.

If you have any good ideas on how to build a reputational firewall, I think most of us would be all ears. I think most of the discussants would be at least content with a world in which organizations/people could platform whomever they wanted, but the effects of those decisions would not splash over onto everyone else. Unfortunately, I think this is ~impossible given the current structure and organization of EA. There is no authoritative arbiter of what is/isn't EA, or what is/isn't consistent with EA. Even if the community recognized such an arbiter, the rest of the world probably wouldn't.

I'm not aware of Manifest (or even Manifold) receiving funding from Open Phil, although Manifold did receive significant funding from an EA-linked funder (FTXFF).

Actually a third: ~ "the approximate percentage of EAs who would think it'd be ridiculous to give a platform to someone like Hanania." I don't need convincing that both "Forum users" and "EAs that James Herbert personally knows" are likely unrepresentative samples of EAs as a whole. And I'd still be distressed if "most" EAs thought it ridiculous but a sizable minority thought it was affirmatively a good idea.

There's a lot of good stuff here, but I think there's another side to "[c]onsider the virtue of silence." There is a belief/norm, quite common in the broader world, that qui tacet consentire videtur (often translated as "silence means consent," but more literally ~ "he who is silent is taken to agree"). Whether or not one thinks that should be a norm, it is a matter of social reality at this point in time. 

I wish we had a magic button that would contain any effects of the Manifest organizers' decisions to Manifest itself, preventing any reputational or other adverse effects from falling on anyone else. To me, it is the need to mitigate those third-party adverse effects that makes silence problematic here. After all, all of us have much better things to do with our lives than gripe about other people's choices that don't impose adverse effects on others (or other moral patients).

James brought up site moderation philosophy in a comment ("Regarding Manifest and controversial attendees, we kept the same ethos as a our site, where anyone can create markets."). I responded by asking how that jibed with plausible business models for the company. So it's a discussion about an issue first raised in the comments. I do think it's of some relevance to a broader question hinted at in your post: whether the founders' prior ideological commitments are causing them to make suboptimal business decisions.

Yeah, I agree with that. Mainly, I think I want to signal to the audience that the situation in which orgs find themselves reflects thorny policy tradeoffs rather than a simple goof by Congress. Especially since the base rate of goofs is so high!

  • Rather, the more common sentiment, and the one I think is mostly attracting upvotes, seems to me to be like "who are we to tell other people who to talk to?"
    • I don't know much about him, but from what I do know I think the guy sounds like a jerk and I'd be meaningfully less interested in going to events he was at; I can't really imagine inviting him to speak at anything
    • But it also seems to me that it's important to respect people's autonomy and ability to choose differently

Criticizing someone's decisions is not denying them autonomy or the ability to choose.

To use a legal metaphor, one way of thinking about this is personal jurisdiction -- what has Manifest done that gives the EA community a right to criticize? After all, it would be uncool to start criticizing random people on the Forum with no link to EA, and it would generally be uncool to start criticizing random EAs for private actions unconnected to EA or EA-adjacent spaces.

I have two answers to that:

  • The first is purposeful availment. If an actor purposefully takes advantage of community resources in connection with an action, they cannot reasonably complain about their choices being the subject of community scrutiny. The Manifest organizers promoted their event on the Forum. A significant portion of Manifold's funding has come from an EA-linked source (FTXFF) and IIRC they have sought grants from OP as well. 
  • The second is adverse effect / distancing. It's reasonable for members of a community that has been adversely affected by an action, or to whom the action is being imputed (or might reasonably be expected to be imputed), to criticize -- especially if the criticism helps set the record straight that the community doesn't support the action. It is not reasonable for the speakers to expect community members not to mitigate the reputational harm that their actions have caused. This is why, for instance, it was appropriate for CEA to issue a statement on the Bostrom e-mail even though there was no purposeful availment.
    • I don't think it matters if the imputation of others is reasonable. It is the experience of harm to the community that gives it standing to criticize. That community may have been involuntarily and unreasonably dragged into the controversy, but that doesn't change the fact that it is there. In contrast, I think standing based on possible future imputation would be triggered only where some meaningful degree of imputation would be reasonable to some class of outside observers.
    • From my "side," some of the comments by third parties have made the adverse effect / need to distance more acute. People are advocating scientific racism on the merits, which is not a position that the Manifest organizers actually took. While it's important to distinguish between the organizers' stance and that of the third-party commenters, I think it's unavoidable that the third-party comments are raising the temperature here.

So if the Manifest organizers don't want to give the EA community a right to criticize, they can avoid or at least limit purposeful availment, and can try to take steps to avoid negative secondary effects on the EA community.
