Tiresias


Hm, maybe, I'm not sure. I like to have a professional atmosphere, and public sharing of misdeeds can lead to a culture of gossip. But I think it is appropriate to speak publicly about it if the situation was mishandled (in my case, unclear, as it's been reopened) or if the person should be blacklisted (I do not think that is the case here).

Lol not you. I deleted most of the detail I included in that comment because it felt like it was distracting from the SBF discussion (this convo should not be used as a soapbox for me), and because the case has recently been reopened (which means it's probably best if I don't talk about it, and there might also be a good outcome). And I also just worry about pissing people off.

It's also like, what are people supposed to do with an anonymous comment making a very vague allegation?

Agree. And also worth noting it seems like he may have never actually been that rich, but just, you know, lied and did fraud.

The general thing I'm hearing is that with a lot of people who commit misconduct, you/CEA will hear about the misconduct relatively early on, and action should be taken before things grow too large to correct. Early, decisive action is important. Leadership should be taking a lot more initiative in responding to misconduct.

This tracks with my experience too. I've reported professional misconduct, had it not be taken seriously, and watched that person continue to gain power. The whole experience was maddening. So, yeah, +1 to early intervention following credible misconduct reports.

It really must feel awful to report serious misconduct and have it not be taken seriously. I've had a similar experience and it crushed me mentally.

I've been thinking about this situation a lot. I don't know many details, but I'm trying to sort through what I think EA leadership should have done differently.

My main thought is: maybe, in light of these concerns, they should have kept taking his money but not tied themselves to him as much. But I don't know many details about how they tied themselves to him. It's just that handling misconduct cases gets complicated when the misconduct is by one of the 100 richest people in the world. And while it's clear Sam treated people poorly, broke many important rules, and lied frequently, it was not clear he was stealing money from customers. So it just leaves me confused. But thank God I am not in charge of handling these sorts of things.

I know it's also not your responsibility to know what to do in situations like this, but I'd be curious to hear what you wished EA leadership/infrastructure had done differently. I think that might help give shape to my thoughts around this situation.

I don't know if I'm communicating super clearly here, so I want to clarify: this is not meant as a critical comment at all! I hope it doesn't read as downplaying your experience, because I do feel super alarmed about everything and get the sense EA fucked up big here. I'm fully in support of you, but I'm worried my confusion makes that harder to read.

Retracting because on reflection I'm like: no one knew he was stealing funds, but I think leadership knew enough of the ingredients to not be surprised by this. It's not just that Sam treated employees poorly; leadership heard that he would lie to people (including investors), mix funds, and play fast and loose with the rules. They may not have known the disastrous ways these would combine. Even so, it seems super bad, and while I'm still confused about what the ideal way to handle it would have been, it does seem clear to me it was egregiously mishandled.

[This comment is no longer endorsed by its author]

I'm not sure about this suggestion, but I wonder if, as part of the EAG survey, attendees might be asked whether they had an uncomfortable experience with someone.

I know someone who had an uncomfortable experience at this EAG, but on the minor end of the spectrum. I don't think they considered reporting it to CH at CEA until they heard that someone else had independently brought up an uncomfortable experience with the same person at that EAG. On its own, it was an incredibly minor experience that would seem excessive to bring up to CEA. But hearing it as part of a pattern of behavior made it more concerning.

So given that reporting to CEA can feel too serious for many offenses, maybe a survey would be a place people could report more minor experiences like this?

I think another barrier to reporting minor incidents is that the potential reporter wouldn't want too serious a sanction to be taken against the person. Many may just want someone in a position of authority to say, "Hey, you may not realize it, but you're making people uncomfortable."

I'm a little worried that instituting this policy would lead to a hostile atmosphere or something where fingers are pointed at each other. But maybe it's worth testing at an EAGx or something? Not sure.

Hm, I think I did not communicate my concern clearly. The concern I have is not with the CH lead sympathizing with the text of the post. At least in a personal capacity, I agree that the text of the post adds to the conversation in a useful way. I also understand that women are not all in agreement on these points.

The concern I have is the implicit endorsement of the meme shared at the top of the post. Not the text of the post. It's one thing for community members to share these sorts of trivializing memes that mock the position they disagree with. But when the CH lead shares a post that opens with that meme, I wonder, is that how you see that side of the conversation?

I'm not saying the meme at the top of the post means you can't link to it, quote it, reference it. But I'd want some caveat.

Maybe my broader point is that this is clearly an emotionally intense topic, one that touches on many people's personal experiences. And as CH lead, many people are looking to you right now, and a lot is riding on it. You have substantial power, and we as individual community members have much less. So it feels important, at least to me, that this issue is handled sensitively, with a lot of empathy and understanding. It's fine for you to express sympathy for different positions, and I think it's valuable to transparently see where you are at. But many people have considered this meme trivializing and mocking. Memes that make fun of an argument they disagree with will generally have that effect. So seeing you tacitly endorse mocking/trivializing content is upsetting.

The meme is absolutely trivializing. It is mocking the opposite side of the discussion. The text of the post is not trivializing. Agreeing with the meme does not mean the meme is not trivializing.

One thing I struggle with in discourse is expressing agreement. Agreeing seems less generative, since I often don't have much more to say than "I agree with this and think you explain it well." I strongly agree with this post and am very happy you made it. I have some questions/minor points of disagreement, but I want to focus on what I agree with before I get to that, since I overwhelmingly agree and don't want to detract from your point.

The sentiment "we are smarter than everyone and therefore we distrust non-EA sources" seems pervasive in EA. I love a lot about EA; I am a highly engaged member. But that sentiment is one of the worst parts about EA (if not the worst). I believe it is highly destructive to our ability to achieve our aims of doing good effectively.

Some sub-communities within EA seem to do better at this than others. That being said, I think every element of EA engages in this kind of thinking to some extent. I don't know if I've ever met any EA who didn't think it on some level. I definitely have a stream of this within me. 

But, there is a much softer, more reasonable version of that sentiment. Something like "EA has an edge in some domains, but other groups also have worthwhile contributions." And I've met plenty of EAs who operate much more on this more reasonable line than the excessively superior sentiment described above. Still, it's easy to slip into the excessively superior sentiment and I think we should be vigilant to avoid it.

------

Onto some more critical questions/thoughts.

My previous epistemics centered around "expert consensus." The COVID-19 pandemic changed that. Expert consensus seemed to frequently be wrong, and I ended up relying much more on individuals with a proven track record, like Zeynep Tufekci. I'm still not sure what my epistemics are, but I've moved towards a forecasting-based model, where I most trust people with a proven track record of getting things right, rather than experts. But it's hard to find people with this proven track record, so I almost always still default to trusting experts. I certainly don't think forum/blog posts fit into this "proven track record" category, unless it's the blog of someone with a proven track record. But "proven track record" is still a very high standard: Zeynep is literally the only person I know who fits the bill, and it's not like I trust her on everything. My worry with people using a "forecaster > expert" model is that they won't have a high enough standard for what qualifies someone as a trustworthy forecaster. I'm wondering what your thoughts are on a forecaster model.

My other question concerns peer review: its slowness does strike me as a legitimate issue, though I am not in the AI field at all, so I have very little knowledge here. I would still like to see AI researchers make more effort to get their work peer-reviewed, but I wonder if there might be some dual system, where less time-sensitive reports get peer reviewed and are treated with a high level of trust, while more time-sensitive reports don't go through as rigorous a process but are still shared, albeit with a lower level of trust. I'm really not sure, but some sort of dual system seems necessary to me. It can't be that we totally disregard all non-peer-reviewed work?

Yeah, I strongly agree and endorse Michael's post, but this line you're drawing out is also where I struggle. Michael has made better progress on teasing out the boundaries of this line than I have, but I'm still unclear. Clearly there are cases where conventional wisdom is wrong -- EA is predicated on these cases existing.

Michael is saying that on questions of philosophy we should not accept conventional wisdom, but on questions of sociology we should. I agree with you that the distinction between the sociological and the philosophical is not quite clear. I think your example of "what should you do with your life" is a good example of where the boundaries blur.

Maybe "sociological" is not quite the right framing; something along the lines of "good governance" fits better. The peer review point Michael brings up doesn't fit into that distinction. Even though I agree with him, I think "how much should I trust peer review" is an epistemic question, and epistemics does fall into the category where Michael thinks EAs might have an edge over conventional wisdom. That being said, even if I thought there was reason to distrust conventional wisdom on this point, I would still trust professional epistemologists over the average EA here, and I would find it hard to believe that professional epistemologists think forums/blogs are more reliable than peer-reviewed journals.

Yeah, I'm not sure that people prioritizing the Forum over journal articles is a majority view, but it is definitely something that happens, and there are currents in EA that encourage this sort of thinking.

I'm not saying we shouldn't be somewhat skeptical of journal articles. There are huge problems in the peer-review world. But forum/blog posts, and what your friends say, are not more reliable. And it is concerning that some elements of EA culture encourage you to think that they are.

Evidence for my claim, based on replies to some of Ineffective Altruism's tweets (an account that makes a similar critique):

1: https://twitter.com/IneffectiveAlt4/status/1630853478053560321?s=20 (look at the replies in this thread)

2: https://twitter.com/NathanpmYoung/status/1630637375205576704?s=20 (look at the various replies in this thread)

(If it is inappropriate for me to link to people's Twitter replies in a critical way, let me know. I feel a little uncomfortable doing this, because my point is not to name and shame any particular person. But I'm doing it because it seems worth pushing back against the claim that "this doesn't happen here." I do not want to post a name-blurred screenshot because I think all replies in the thread are valuable information, not just the replies I share, so I want to enable people to click through.)
