AnonymousEAForumAccount

2375 karma · Joined Oct 2019


> They also suggested some steps that we didn’t see as feasible for us.

Can you disclose the specifics of some or all of these steps and the reasons why you didn't think they were feasible?

I agree with all of this, and hope the CH team responds. I'd also add that the video of Kat's talk has a prominent spot on the EAG 2023 playlist on CEA's official YouTube channel. That video has nearly 600 views.

If initial due diligence conducted by an independent third party didn’t uncover obvious evidence about which side is correct, IMO that’s very helpful info for the broader community, and it really seems like there should be a way of sharing that finding without introducing legal liability.

> I would like to see an updated post from the team on what the community should and should not expect from them, with the caveat that they may be somewhat limited in what they can say legally about their scope.

Agree this would be helpful. In addition to clarifying what community expectations should be, I’d like to know whether the Nonlinear affair will be included in either or both of the internal and external reviews that are (were?) being conducted. And if so, would that inclusion have taken place if Ben hadn’t published his post?

Whether or not CEA/EV had in-house counsel, I’d like to think they had an ability to access legal advice. If not, that seems like a poorly thought-out setup.

I agree it makes sense for EV to have a lower risk tolerance in light of the Charity Commission investigation. However, I’m making the following assumptions (it would be great if a lawyer could opine on whether they are accurate):

  1. There are simple steps CH could have taken that would carry very little risk of a defamation suit. For instance, I find it hard to believe CH would be liable if they’d issued a public statement along the lines of “Alice and Chloe report XYZ about Nonlinear; Nonlinear disputes these claims. CH is not publicly picking a side but wants to make people aware of the dispute.” Maybe Alice and Chloe would have objected to that kind of statement, but it seems like it wouldn’t have material defamation risk (though again, I’m not a lawyer).
  2. Inaction by CH could also carry legal risk. For example, if CH hears credible complaints against an org that is still allowed to come to EAG (an event run by CH's CEA colleagues), and someone who joins that org at EAG subsequently suffers the same treatment CH was aware of, I imagine CEA/EV could in some cases be liable if that person wanted to sue.

Let me know if you'd like me to remove my comment while this gets sorted out.

Thank you for all your efforts in this endeavor, Ben; you’ve performed a very valuable service to the community.

Your comments about CEA’s Community Health team in this post seem particularly important to me. If CEA’s CH team had in-depth knowledge of how Alice and Chloe described their experiences, had found no reason to doubt those accounts, and still declined to make any kind of public statement, that’s incredibly damning. I'm open to hearing CH's take on things, but if that’s actually the case I agree with your view that “the world would probably be better if the CEA Community Health team was disbanded and it was transparent that there is little-to-no institutional protection from bullies in the EA ecosystem.” That’s definitely a new position for me; while I’ve criticized CH’s work before, my prior assumption was that the team could be fixed.

Thanks for running this analysis, Ollie! Interesting findings!

> it does suggest the results are sensitive to my scoring system. I'm not sure where this leaves me; that isn't ideal and I'd like something more robust but, on the other hand, I think these high-scoring results (people securing jobs, teams being formed) are exactly the kind of things we want to happen at our events so I think it's reasonable to put significant weight on them.

Agree that this exercise doesn’t yield an obvious conclusion. Given that you’ve found the results to be sensitive to the scoring system, I suggest trying to figure out how sensitive. You’ve crunched the numbers using max scores of 50 and 5; I imagine it’d be quick to do the same with max scores of 20, 10, and 1 (the other scores you used in your original scoring system).

The other methodology I’d suggest looking at would be to keep the same relative rankings you used originally, but condense the range of scores (to, say, 1, 2, 3, 4, 5 vs. 1, 5, 10, 20, 50). That would capture the fact that you think starting an EA project is more valuable than meeting a collaborator (which is lost by capping the scores at 5), but would assess it as 2.5x more valuable rather than 10x. (Btw, I think the technical term for “beheading” the data is “Winsorizing”, though that’s usually done using percentiles of the data set, which is another way you could do a sensitivity analysis.)
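
To illustrate, here’s a minimal Python sketch of what that sensitivity check could look like. The outcome scores below are hypothetical placeholders (not your data), and the fixed caps are a crude stand-in for percentile-based Winsorizing:

```python
# Hypothetical outcome scores for ten attendees on the original 1/5/10/20/50 scale.
original_scores = [1, 5, 1, 10, 50, 5, 20, 1, 50, 10]

# Condensed scale: same relative ranking, compressed range (1,5,10,20,50 -> 1,2,3,4,5).
condensed = {1: 1, 5: 2, 10: 3, 20: 4, 50: 5}

def total(scores, cap=None):
    """Sum outcome scores, capping each at `cap` when a cap is given."""
    return sum(min(s, cap) if cap is not None else s for s in scores)

# Recompute the headline total under each candidate max score.
for cap in (50, 20, 10, 5, 1):
    print(f"max score {cap:>2}: total = {total(original_scores, cap)}")

# Same data under the condensed 1-5 scale.
print("condensed scale: total =", total([condensed[s] for s in original_scores]))

# For percentile-based Winsorizing instead of fixed caps, something like
# scipy.stats.mstats.winsorize(original_scores, limits=(0, 0.1)) clips the top 10%.
```

If the relative cost-effectiveness of the events stays stable across all of these variants, that would be a much stronger result than any single scoring system can provide.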

This sort of more comprehensive sensitivity analysis would shed some light on whether your observation about EAGxAustralia is supported by the broader data set: 

> For EAGxAustralia, three outcomes really stood out and the rest were good but unremarkable, according to the attendees. It seems likely to me that a good chunk of the value of the event was accrued by just a few people who had more life-changing things happen to them (e.g. getting a grant or job).

If that turns out to be a robust finding, it has pretty big implications for how events should be run. FWIW I’d consider it a more important finding than EAGx events looking more cost-effective than CEP events, and would suggest editing the bottom-line-up-front section to note it.

Longer term, I’d look to refine the metrics you use for events and how you collect the data. I love that you’ve started looking beyond “number of connections” to “valuable outcomes”; this definitely seems like a move in the right direction. However, it’s not feasible for you to score responses from attendees at scale going forward. So I’d suggest asking respondents to score the event themselves, while providing guidance on how different experiences should be scored (e.g. starting a new project = X) to promote consistency across respondents.

My hunch is that it’d be good to have people score the event along the different dimensions (connections, learning, motivation/positivity, action, other) you listed in the “How do attendees get value from EA community-building events?” post. That might make the survey too onerous, but if you could collect that data you’d have a lot of granularity about which events accrued which type of value, and it’s probably easier to do relative scoring within categories than across them. You’d still be able to create a single score based on a weighted average of the different dimensions (where you’d presumably give connections and learning the most weight, since that’s where people seem to get the most value).
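
As a concrete illustration, here’s a minimal sketch of how such a weighted score could be computed; the weights and ratings below are hypothetical placeholders, not values from your posts:

```python
# Hypothetical weights: connections and learning weighted most heavily, per the
# observation that attendees report getting the most value from those dimensions.
weights = {"connections": 0.35, "learning": 0.30, "motivation": 0.15,
           "action": 0.15, "other": 0.05}  # sums to 1.0

# One respondent's hypothetical 1-5 ratings for each dimension.
ratings = {"connections": 4, "learning": 5, "motivation": 3, "action": 2, "other": 1}

overall = sum(weights[d] * ratings[d] for d in weights)
print(f"weighted event score: {overall:.2f}")  # 3.70 with these placeholder numbers
```

A nice property of this setup is that the per-dimension ratings can be collected once and re-weighted later if your view of the relative importance of the dimensions changes.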

Thanks for providing that background, Jessica; very helpful. It'd be great to see metrics for the OSP included in the dashboard at some point.

It might also make sense to have the dashboard provide links to additional information about the different programs (e.g. the blog posts you link to) so that users can contextualize the dashboard data.
