Zachary Robinson 🔸

CEO @ Centre for Effective Altruism

Open Phil does not want to fund anything that is even slightly right of center in any policy work

This is false.

I think it's possible our views are compatible here. I want expertise to be valued more on the margin because I've found EV and many other EA orgs tilt towards an extreme of prioritizing value alignment. That said, I certainly believe there are cases where value alignment and general intelligence matter most, and also cases where expertise matters more.

I think the key lies in trying to figure out which situations are which in advance.

I think the weighted views of the community should likely inform CEA's cause prioritization, though I think it should be one data point among many. I do continue to worry a bit about self-fulfilling prophecies. If EA organizations make it disproportionately easy for people prioritizing certain causes to engage (e.g. by providing events for those specific causes, or by heavily funding employment opportunities for those causes) then I think it becomes murkier how to account for weighted cause prioritization because cause prioritization is both an input and an output.

I think it's super reasonable for people to be confused about this. EV is a ridiculously confusing entity (or rather, set of entities), even without the name change and overlapping names.

I wouldn't consider Wytham to have ever been a part of the project that's currently known as CEA. A potential litmus test I'd use is "Was Wytham ever under the control of CEA's Executive Director?" To the best of my knowledge, the answer is no, though there's a chance I'm missing some historical context.

This comment also discusses this distinction further.

[T]hese seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn't give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, 'Insanity is doing the same thing over and over again and expecting different results.'

 

I really really wish 'transparency' would make the list again (am I crazy? I feel like it was on a CEA list in some form in the early days, and then was removed). I think there are multiple strong reasons for making transparency a core principle:

 

There's a distinction between what an organization wants to achieve and how it wants to achieve it. The principles described in the original post are related to the what. They help us identify a set of shared beliefs that define the community we want to cultivate.

 

I think there's plenty of room for disagreement and variation over how we cultivate that community. Even as CEA's mission remains the same, I expect the approach we'll use to achieve that mission will vary. It's possible to remain committed to these principles while also continuing to find ways to improve CEA's effectiveness.

 

I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don't think it's a goal in itself. Looking at the spectrum of approaches EA organizations take to doing good, I'm glad that there's room in our community for a diversity of approaches. I think transparency is a good example of a value where organizations can and should commit to it at different levels to achieve goals inspired by EA principles, and as a result I don't think it's a principle that defines the community.

 

For example, I think it's highly valuable for GiveWell to have a commitment to transparency in order for them to be able to raise funds and increase trust in their charity evaluations, but I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity. Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency. I think it's reasonable for different individuals and organizations in the EA community to have different standards for transparency, and I'm happy for CEA to support others in their approach to doing good at a variety of points along that transparency spectrum.

 

When it comes to CEA, I think we would ideally be more transparent and communicate with the community more, though I also don't think it makes sense for us to have a universal commitment to transparency such that I would elevate it to a "core principle." I think different parts of our work deserve different levels of transparency. For example:

  • I think CEA should communicate about programmatic goals, impacts, and major decisions, which we've done before (see e.g. here) – but I think we would ideally be doing more.
  • On the other end of the spectrum, there are some places where confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team. I don't expect this will be a novel idea for most readers, but I think it's useful to illustrate that even for CEA, transparency isn't an unalloyed good.
  • Somewhere in between is something like the EAG admissions bar. We do share significant amounts of information about admissions, but as Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process. I think it's worth us potentially investing more in similar meta-transparency around where we will and won't expect to share information. I suspect the lack of total transparency will upset some members of the community (particularly those who aren't admitted to our events), but I think the tradeoffs are plausibly worth it.

 

I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions? On reading this post, I'm genuinely confused about what any of this means in terms of practical expectations about CEA's activities.

 

I feel quite strongly that these principles go beyond applause lights and are substantively important to EA. Instead of going into depth on all of the principles, I'll point out that many others have spent effort articulating the principles and their value, e.g. here, here, and here.

To briefly engage with some of the points in your comment and explain how I see these principles holding value:

  • Impartiality and scope sensitivity can exist independently of each other. Many contemporary approaches to philanthropy are highly data-driven and seek to have more impact, but they aren't impartial with respect to their beneficiaries. As an example, the Gates Foundation's US education program strikes me as an approach that is likely to be scope-sensitive without being impartial. They're highly data-driven and want to improve US education as much as possible, but it seems likely to me that their focus on US education, as opposed to e.g. educational programs in Nigeria, stems from Gates being in the US rather than from an impartial consideration of all potential beneficiaries of their philanthropy.
  • I also think it's possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don't try to account for scope sensitivity (e.g. corporate campaigns are likely to improve the lives of orders of magnitude more animals per dollar).
  • I agree that a scout mindset and recognition of tradeoffs are important tools for doing counterfactually large amounts of good. I also think they're still wildly underutilized by the rest of the world. Stefan Schubert's claim that the triviality objection is beside the point resonates with me. The goal of these principles isn't to be surprising, but rather to be action-guiding and effective at inspiring us to better help others.

 

 

'I view the community as CEA's team, not its customers' sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn't gel with the actual focuses of CEA

 

I think it's important to view the quote from the original post in the context of the following sentence: "While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined)." I believe the goals of engaged community members and CEA are very frequently aligned, because I believe most community members strive to have a positive impact on the world. With that being said, if and when having a positive impact on the world and satisfying community members come apart, we want to keep our focus on the broader mission.

 

Some of the comments in response to this post make me worry that people are concerned we won't listen to or communicate with the community. My take is that as "teammates," we actually want to listen quite closely to the community and have a two-way dialogue on how we can achieve these goals. With that being said, based on the confusion in the comments, I think it may be worth putting the analogy around "teammates" and "customers" aside for the moment. Instead, let me say some concrete things about how CEA approaches engagement with the community:

  • I believe the majority of CEA's impact flows through the community. In recent years, our decision-making has placed the most emphasis on metrics around the number of positive career changes people have made as a result of our programs. We think the community has valuable input to give us on how we can help them help others, and we use their input to drive decisions. We frequently solicit feedback for this purpose, e.g. via our recent forum survey, or the surveys we run after most of our events.
  • The ultimate beneficiaries of our work are groups like children who would otherwise die from malaria, chickens who would otherwise suffer in cages, and people who might die or not exist due to existential catastrophes. I think these are populations that the vast majority of the EA community is concerned about as well. I see us as collaborating to achieve these goals, and I think CEA is best poised to achieve them by empowering people who share core EA principles.
  • While I think most people in EA would agree with the above goals, I do think at times that meta organizations have erred too far in the direction of trying to optimize for community satisfaction. I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion.
  • Concretely, this affects how we evaluate CEA's impact. For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction. We do collect data on the latter and treat it as a useful input for our decision-making. Among other reasons, we believe it's helpful because we think one of the things that satisfies many community members is when we help them improve their impact! But it's an input, not the thing we're optimizing for. We have made decisions that may make our events less pleasant (e.g. cutting back on meals and snack variety) because we ultimately think we can use those funds better elsewhere, or that our donors can redirect the funding away from CEA and toward beneficiaries that both they and we care about.
  • Sometimes, approaches to serving different parts of the community are in tension with each other. To return to EAG admissions, I think Eli Nathan does a good job in this comment discussing how we incorporate stakeholder feedback without optimizing for making the community happy. Sometimes we have to make tough decisions on tradeoffs between how we support different parts of the community, and we'll use a mix of community input and our own judgment when doing so.
  • I think if anyone has the best claim to being our customers, it's our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP. I think it's also important to note that I don't perceive this to be a change from CEA's historical practices (if anything, I think this dynamic has become less pronounced with recent changes at Open Philanthropy and CEA, although I'm still very unsure how it will shake out in the long run).
  • I still want us to invest more in communicating with the community. I suspect you and I have different takes on the optimal level of communication and transparency, but I do agree that CEA should directionally be communicating more. Our main bottleneck to doing so right now is bandwidth, not desire. (We're exploring ways to reduce that bottleneck but don't want to make promises.) I think it's a good thing when we engage more, and I'm supportive of efforts from our team to do so, whether that's through proactive posts from us or engaging with community critiques. The desire to be transparent was one of the original inspirations for doing this principles-first post.
  • I think the principles-first approach is good at recognizing the diversity of perspectives in our community and supporting individual community members in their own journeys to do good. We regularly have forum posts, event attendees and speakers, and group members whose cause prioritization reflects choices I disagree with. I think that's good!

I want to flag for Forum readers that I am aware of this post and the associated issues about FTX, EV/CEA, and EA. I have also reached out to Becca directly. 

I started in my new role as CEA's CEO about six weeks ago, and as of the start of this week I'm taking a pre-planned six-week break after a year sprinting in my role as EV US's CEO[1]. These unusual circumstances mean our plans and timelines are a work in progress (although CEA's work continues and I continue to be involved in a reduced capacity).

Serious engagement with and communication about questions and concerns related to these issues is (and already was) something I want to prioritize, but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly. I appreciate that people may want more specific details, but I felt it was better to at least let people know I've acknowledged the concerns than not to respond at all in the short term.

  1. It's unusual to take significant time off like this immediately after starting a new role, but this is functionally a substitute for me not taking an extended break between roles. For some banal logistical reasons, it made more sense for me to start and then take time off.

1. It's unclear what the legal status of EV will be at the end of the process. If it does exist, I expect it would be in a minimalist fashion and I wouldn't expect it to resemble what it has historically looked like (e.g. I don't expect it to be a fiscal sponsor for multiple projects).

2. No specific timeline. It's in a queue with other pieces of public communications I expect to do after I transition into a new role at CEA, and I'm not planning on it being the first piece.

Thanks to the search committee, the CEA team, the boards, and the kind commenters -- I'm looking forward to joining the team!

I'm planning to publish some forum posts as I get up to speed in the role, and I think those will be the best pieces to read to get a sense of my views. If it's helpful for getting a rough sense of timing, I'm still working full-time on EV at the moment, but will transition into my CEA role in mid-February.

Cross-posting from here

 

Thanks for flagging! New donations won't be used for this settlement. The funding for the settlements has already been secured, and none of EV's projects will need to allocate any additional funding. Besides funding that came from FTX, no funds that have previously been donated to a specific project will be used as part of this settlement.

As noted by Jason, the EV US settlement remains subject to court approval, and we won't be commenting on it further while the settlement process is still underway. With that being said, we didn't want any misunderstandings to disrupt CEA's fundraising efforts in the meantime.

Also, as a minor correction, the motion to approve the settlement was not filed by EV US – it was filed by the FTX debtors (this is standard practice for approval of settlements in bankruptcy cases).
