AnonymousEAForumAccount

1809 karma · Joined Oct 2019

Comments (173)

My colleague Shakeel Hashim at CEA is focused on communications for all of EA, not CEA in particular.

 

Are there any mechanisms in place to ensure that Shakeel’s work prioritizes the interests of “all of EA, not CEA in particular”? I think these interests will generally, but not always, be aligned. The recent allegations that multiple CEA board members were aware of SBF behaving unethically at Alameda in 2018 (as well as community requests for comments on those allegations) seem like an important area of potential misalignment.

I’m gratified you find my contributions helpful, Guy; thank you for the positive feedback.

 

I think tackling these problems (some of which have easy solutions, like making boards larger and more diverse and independent) should be one of the top things the EA community starts demanding from these orgs.

I’m somewhat skeptical that there are a lot of easy solutions here. “Making boards larger and more diverse and independent” would help in my view, but a lot depends on procedural factors (how new board members are selected, who has input into that process, what the board’s mandate is, etc.) as well as cultural factors (how empowered the board feels, how much the organization engages with the board, whether stakeholders feel represented by the board, etc.). I’d argue that the advisory panel CEA created in 2017 is a good example of how simply adding more perspectives isn’t sufficient; the community had no input into the panel’s composition or mission, and the panel seems to have petered out fairly quickly after CEA apparently stopped consulting it.

In my opinion, one of the best easy starting points would be for EA organizations and individuals to investigate the EA Good Governance Project, which seems well positioned to improve governance of specific EA organizations.

The trickier, and probably more important, task is to improve community-level governance. In this space, I’d argue the highest priority is starting to address the question of how governance should work at Effective Ventures. This is not an easy question, but it is a critical one due to a) the importance of EV’s work to EA; b) the implausibility (in my opinion) of EV’s current board providing good oversight to its wide-ranging projects; c) EV operating certain projects on behalf of the broader community (e.g. community health, EAG, effectivealtruism.org); and d) allegations that EA leaders including multiple EV board members were aware of SBF behaving unethically at Alameda in 2018, potentially misaligning the incentives of EV and the EA community.

Some of these factors certainly apply to other organizations. OpenPhil’s work is obviously critical to the EA community so (a) applies, and I suspect (b) does too. But OpenPhil hasn’t assumed responsibilities from the EA community so (c) doesn’t apply, which makes me somewhat sympathetic to the argument that it’s Dustin and Cari’s money and they can do whatever they want with it. So the combination of factors above is really why I think the broader governance discussion needs to start with EV.

However the EA community decides to pursue better governance, I hope it leverages existing expertise on the subject and avoids notions of “EA exceptionalism” (which I think is always problematic, but particularly bad in a context like governance where EA has a poor track record). Brigid Slipka’s resignation letter from GiveWell’s board links to several resources and offers a thoughtful framework of her own. I think that framework is useful for understanding EV’s governance; I’d characterize EV’s programs as being in the “adolescent” stage progressing toward “mature”, while the board setup seems to still be at the “start-up” stage.

This is a terrific post, thank you! And it’s generated some excellent discussion too. I particularly agree with Ivy’s thoughts about the effects of focusing on recruiting young people (I’ve discussed some similar things here and here) and Tobyj’s comment about how “more work needs to be done on governance at the ecosystem level” vs. the level of specific organizations.

In my opinion, the case for increased attention to governance is even stronger because your list of examples omits some of the community’s largest historical and ongoing instances of weak/unprioritized governance:

  • The board of Effective Ventures currently has an oversight mandate that seems on its face to be completely unrealistic. This five-person board is responsible for overseeing CEA, 80k, Forethought Foundation, EA Funds, GWWC, Centre for the Governance of AI, Longview Philanthropy, Asterisk, Non-Trivial, and BlueDot Impact. I’m not sure what is meant by “The Board is responsible for overall management and oversight of the charity, and where appropriate it delegates some of its functions to sub-committees and directors within the charity”; it sounds like some of the organizations that make up EV might be responsible for their own oversight (which would raise other governance questions).
  • Effective Ventures has not communicated its thinking with respect to cause prioritization (though, perhaps tellingly, all five EV board members appear to hold a longtermist worldview) and how (if at all) that thinking affects which organizations are allowed to operate under the EV umbrella. EV also hasn’t communicated anything about whether or how it is accountable to the broader EA community.
  • Community-level governance institutions are limited, and arguably non-existent. CEA’s community health team could possibly fit the bill, but it seems notable that (as far as I know) there are no mechanisms in place to make that team accountable to the broader community or to independently assess instances where CEA itself (or individuals closely affiliated with CEA) could be causing a community health problem.
  • In my detailed Red Teaming analysis of CEA’s community building work, I found that (particularly prior to 2019) CEA routinely underdelivered on commitments, rarely performed meaningful project evaluations, and understated the frequency, degree, and duration of mistakes when publicly discussing them, often leading to negative externalities borne by the broader community. Some of the problems, such as favoring CEA’s cause priorities over the community’s in contexts like EA Global, effectivealtruism.org, and the EA Handbook, directly relate to community governance issues. (One of my conclusions from this analysis was “the EA community should seriously engage with governance questions.”)
  • When GiveWell changed the composition of its board in early 2019, three board members resigned, two of whom publicly described concerns they had with the board’s composition and role.
    • Brigid Slipka: “I am unsettled by the decision to reduce the Board down to a hand-picked group of five, two of whom are the co-founders and another of whom is the primary funder (in addition to also funding Holden at Open Philanthropy). This feels regressive. It represents a tightening of control at a moment when the focus should be on increased accountability to the public GiveWell serves.”
    • Rob Reich: “I have continuing concerns that the board’s important governance role is not taken seriously enough by GiveWell leadership. Board meetings are too frequently nothing more than the performance of meeting legal obligations. For an organization as large and as important as GiveWell, the oversight of GiveWell operations should be much greater, and welcomed by GiveWell leadership as an opportunity for diversifying input on GiveWell strategy. Recent decisions by GiveWell to shrink board membership are in my view a mistake, making it less likely that the board will play any serious role in steering the organization into the future.”
  • OpenPhil has a small board populated almost entirely by insiders who have been part of the organization since its founding. “The Open Philanthropy 501(c)(3) is governed by a Board of Directors currently consisting of Dustin Moskovitz (Chair), Cari Tuna, Divesh Makan, Holden Karnofsky, and Alexander Berger. Open Philanthropy LLC is governed by a Board of Managers currently consisting of Dustin Moskovitz, Cari Tuna, Elie Hassenfeld, Holden Karnofsky, and Alexander Berger.” 
  • CEA, both historically and currently, has deprioritized public program evaluations that would add accountability to community members and outside stakeholders. Instead, CEA has prioritized accountability to board members and major funders. Notably, there is significant overlap between those constituencies, with three of CEA’s five board members being employed by either Open Phil or the FTX Foundation (until the latter’s team resigned due to the scandal).

In “working on Effective Ventures’ response to the current situation”, who are you representing? Effective Ventures? The EA community? 

If the former, how in principle would you handle conflicting interests between different parts of EV if those were to arise? Do you see it as problematic that nobody is responsible for taking the community's perspective?

If the latter, are there any mechanisms to protect against potential misaligned incentives (i.e. you being paid by EV) if there were conflicts between the interests of the community and EV?

Thanks for publishing that Max, and for linking to it from CEA's strategy page. I think that's an important improvement in CEA's transparency around these issues.

Unfounded rumors about Leverage were common in the EA community when I was involved, and it's disappointing that they continue to be perpetuated. 

Most of the rumors about Leverage that I heard were along the lines of what Zoe later described (which is also largely consistent with other accounts described here and here). So I wouldn’t call those rumors “unfounded” at all. In this case at least, where there was smoke there turned out to be a fire.

Other rumors I heard were quite consistent with Leverage’s own description (pretty culty, in my opinion) of why it terminated an eight-year exploratory psychology program:

As our researchers sensitized themselves further, and accessed more and more of what seemed to be unconscious content, several negative effects occurred. Some of the psychological content was itself distressing, there appeared to be psychogenic effects, with individuals negatively affecting each other unintentionally through what appeared to be non-verbal communication, and conflict within the group escalated. After attempting to resolve the problems and making insufficient headway, we shut down the psychology research program and began the process of re-organizing the institute.

 

Re:

If you'd like to learn more about what Leverage was like, a bunch of information has come out that allows for a more nuanced and accurate picture. Two of the best, in my view, are Cathleen’s post and our Inquiry Report.  Also, if you're interested in what we work on today, feel free to visit our website.

I haven’t read Cathleen’s post, as it apparently takes several hours to read. I skimmed the Inquiry Report, enough to learn that it has methodological biases that render it largely useless in my opinion (“Although we reached out to everyone from Leverage 1.0, not everyone chose to speak to us, including those who may have had the worst experiences.”)

I urge anyone who would “like to learn more about what Leverage was like” to read Zoe’s account and the other accounts I link to at the start of this comment instead of or in addition to the material Kerry suggests. 

I was in charge of the EAO team when two members of my team initially conceptualized Pareto, but mid-way through EAO merged into the rest of CEA, and Will was in charge. Will was responsible for fundraising for Pareto, and he signed off on having it at the 454 building (where many of Leverage's staff were located).

The interview process seems to have been the most problematic part of Pareto and was presumably designed by your team members who ran the project. Who should have nipped that in the bud? If Will didn’t take over until mid-way, would that have been your responsibility? Are you aware of any accountability for anyone involved in the creation or oversight of the interview process?

When Will signed off on having Pareto at the Leverage building, was he aware participants wouldn’t be informed about this?

 

I don't think it's accurate to say that Pareto was overall culty or anything like that. I've seen the post-program evaluations, and they seem to have been pretty good overall, with high ratings given to much of the content taught by Leverage and Paradigm staff. 

 

Were fellows anonymous when submitting their evaluations, and confident that their evaluations could not be traced back to them? I imagine they’d have been reluctant to criticize the program (and by extension the highly influential EAs involved, including yourself) if they could not be completely confident in their anonymity. I’d also note that the fellows likely had very high thresholds for cultiness, given that they weren’t turned off by the interview process.

Since CEA never shared the program evaluations (nor published its own evaluation despite commitments to do so), I feel like the most credible publicly available assessment is the observation from Beth Barnes (one of the Fellows) that “I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways.”

I imagine the fellowship itself was less culty than the interview process (a pretty low bar). As to how culty it was, I’d say that depends to some degree on how culty one thinks Leverage was at the time, since Barnes also noted: “Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.”

Zoe’s account (and the other accounts described here and here) certainly makes Leverage sound quite culty. That would be consistent with my own interactions with Leverage (admittedly quite limited); I remember coming out of those interactions feeling like I'd never encountered a community that (in my subjective opinion, based on limited data) emitted such strong culty vibes, and that nothing else came particularly close.

 

Also, many of the Pareto Fellows went on to do important work in and around the EA community.

I don’t doubt at all that many Pareto Fellows went on to do great work. Given the caliber and background of the people who participated, it would be weird if they didn’t. But I’m not aware of any evidence that Pareto positively contributed to their impact.

For the record, while I’m highly critical of the Pareto Fellowship as a program, those criticisms do not extend to the Fellows themselves.

Does EV have any policies around term limits for board members? This is a fairly common practice for nonprofits, and I’m curious how EV thinks about the pros and cons, and more generally how it thinks about board composition and responsibilities given the outsize role the board plays in community governance.

Re: 1, can you share which board member is responsible for this?

Re: 2, is this something CEA plans to work on in say the next 3 months? If not, would it help if a volunteer did an initial draft?

If you have information or anecdotes that relate to this analysis, you can share them anonymously via this form. Thanks to an anonymous EA who DM’d me with the suggestion to set up this form.
