I'm Executive Director at the Centre for Effective Altruism. I used to be a moderator here, and helped to launch the new version of the Forum in 2018.
I don't know if you've seen ea.greaterwrong.com - that has a dark mode (in the left hand menu).
I think that applying EA principles and concepts to different areas is really valuable, even if they’re areas that EA hasn’t focused on a lot up to this point. I’m glad you asked this question!
I think a very common failure mode for CEA over the past ~5 years has been: CEA declares they are doing X, now no one else wants to or can get funding to do X, but CEA doesn't actually ever do X, so X never gets done.
I agree with this. I think we've been making progress both in following through on what we say we'll do and in welcoming others to fill neglected roles, and I'd like to see us continue to make progress, particularly on the latter.
I agree that it’s important that CEA reliably and verifiably listens to the community.
I think that we have been listening, and we published some of that consultation - for instance in this post and in the appendix to our 2019 review (see for instance the EA Global section).
Over the next few months we plan to send out more surveys to community members about what they like/dislike about the EA community, and as mentioned above, we’re thinking about using community member satisfaction as a major metric for CEA. If it did become a key metric, it’s likely that we would share some of that feedback publicly.
We don’t currently have plans for a democratic structure, but we’ve talked about introducing some democratic elements (though we probably won’t do that this year).
Whilst I agree that consultation is vital, I think the benefits of democracy over consultation are unclear. For instance, voters are likely to have spent less time engaging with arguments for different positions, and there is a risk of factionalism. Also, the increased number of stakeholders would shrink the space of feasible options, because few options command agreement across a wide spread of the community, which makes it harder to pursue more ambitious plans. I think you’re right that this would increase community support for CEA’s work and make CEA more accountable. I haven’t thought a lot about the options here, and it may be that there are some mechanisms which avoid the downsides. I’d be interested in suggestions.
Anyway, I definitely think it’s important for CEA to listen to the community and be transparent about our work, and I hope to do more of that in the future.
Yes, we’ve thought about this. We currently think that it’s probably best for them to spin off separately, so that’s the main option under consideration, but we might change our minds (for instance as we learn more about which candidates are available, and what their strategic vision for the projects would be).
This is a bit of a busy week for me, so if you’d like me to share more about our considerations, upvote this comment, and I’ll check back next week to see if there’s been sufficient interest.
I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work. Hopefully we can share more about strategy and object-level work in the future.
That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have underemphasized that. You can read more in the (lengthy, sorry!) appendix to our 2019 post, but some highlights are:
Of course, there are lots of improvements we still need to make, but I still feel happy with this progress, and with the progress we made towards more reliably following through on commitments (e.g. addressing some of the problems with EA Grants).
Sorry, that paragraph wasn’t clear. Previously, we had offices in both Oxford and Berkeley. The change is to close the Berkeley office (for reasons discussed above) and keep the Oxford office open. We think it’s useful to be in Oxford because that’s where a lot of our staff are currently based, and because it allows us to keep in touch with other EA orgs (e.g. the Global Priorities Institute) who share our office in Oxford.
Thanks for your comments!
>Wasn't GWWC previously independent, before it was incorporated into CEA in 2016?
Essentially, yes. Giving What We Can was founded in 2009. CEA was set up as an umbrella legal entity for GWWC and 80,000 Hours in 2011, but the projects had separate strategies, autonomous leadership etc. In 2016, there was a restructure of CEA such that GWWC and some of the other activities under CEA’s umbrella came together under one CEO (Will MacAskill at that time), whilst 80,000 Hours continued to operate independently.
>What's changed over the last 5 years to warrant a reversal?
To be honest, I think it’s less that the strategic landscape has changed, and more that the decision 5 years ago hasn’t worked out as well as we hoped.
(I wasn’t around at the time the decision was made, and I’m not sure if it was the right call in expectation. Michelle (ex GWWC Executive Director) previously shared some thoughts on this on the Forum.)
As discussed here, from 2017 to 2019 CEA did not invest heavily in Giving What We Can. Communications became less frequent and the website lost some features.
We’ve now addressed the largest of those issues, but the trustees and I think that Giving What We Can is an important project that hasn’t lived up to its (high) potential under the current arrangement (although pledges continue to grow).
Giving What We Can is one of the most successful parts of CEA. Over 4500 members have logged over $125M in donations. Members have pledged to donate $1.5B. Beyond the money raised, it has helped to introduce lots of people (myself included) to the EA community. This means that we are all keen to invest more in GWWC.
I also think it’s important to narrow CEA’s focus. That focus looks like it’s going to be nurturing spaces for people to discuss and apply EA principles. GWWC is more focused on encouraging a particular activity (pledging to donate to charities). Since it was successfully run as an independent project in the past, trying to spin it out seemed like the right call. I’m leading on this process and trustees are investing a lot of time in it too, and we’ll work very closely with new leadership to test things out and make sure the new arrangement works well.
"I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness https://en.wikipedia.org/wiki/Animal_consciousness#Cambridge_Declaration_on_Consciousness).
I think that the Cambridge Declaration on Consciousness is weak evidence for the claim that this is a "consensus view among neuroscientists".
From Luke Muehlhauser's 2017 Report on Consciousness and Moral Patienthood:
1. The document reads more like a political document than a scientific document. (See e.g. this commentary.)
2. As far as I can tell, the declaration was signed by a small number of people, perhaps about 15 people, and thus hardly demonstrates a “scientific consensus.”
3. Several of the signers of the declaration have since written scientific papers that seem to treat cortex-required views as a live possibility, e.g. Koch et al. (2016) and Laureys et al. (2015), p. 427.
(I was the interim director of CEA during Leaders Forum, and I’m now the executive director.)
I think that CEA has a history of pushing longtermism in somewhat underhand ways (e.g. I think that I made a mistake when I published an “EA handbook” without sufficiently consulting non-longtermist researchers, and in a way that probably over-represented AI safety and under-represented material outside of traditional EA cause areas, resulting in a product that appeared to represent EA, without accurately doing so). Given this background, I think it’s reasonable to be suspicious of CEA’s cause prioritisation.
(I’ll be writing more about this in the future, and it feels a bit odd to get into this in a comment when it’s a major-ish update to CEA’s strategy, but I think it’s better to share more rather than less.) In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes. An example of how this might work is that we might pay an independent contractor to try to figure out who has spent more than two years full time thinking about cause prioritization, and then survey those people. Obviously that project would be complicated: it’s hard to figure out exactly what “cause prio” means, and it would be important to reach out through diverse networks to make sure there aren’t network biases, etc.
Anyway, given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.
When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify (e.g. did you email the survey to anyone or was it all done in person at the Leaders Forum?)
I think you’re probably right that there are some biases here. How the invite process worked this year was that Amy Labenz, who runs the event, draws up a longlist of potential attendees (asking some external advisors for suggestions about who should be invited). Then Amy, Julia Wise, and I voted yes/no/maybe on all of the individuals on the longlist (often adding comments). Amy made a final call about who to invite, based on those votes. I expect that all of this means that the final invite list is somewhat biased by our networks, and some background assumptions we have about individuals and orgs.
Given this, I think that it would be fair to view the attendees of the event as “some people who CEA staff think it would be useful to get together for a few days” rather than “the definitive list of EA leaders”. I think that we were also somewhat loose about what the criteria for inviting people should be, and I’d like us to be a bit clearer on that in the future (see a couple of paragraphs below). Given this, I think that calling the event “EA Leaders Forum” is probably a mistake, but others on the team think that changing the name could be confusing and have transition costs - we’re still talking about this, and haven’t reached resolution about whether we’ll keep the name for next year.
I also think CEA made some mistakes in the way we framed this post (not just the author, since it went through other readers before publication). I think the post kind of frames this as “EA leaders think X”, which I expect would be the sort of thing that lots of EAs should update on. Even though I think it does try to explicitly disavow this interpretation (see the section on “What this data does and does not represent”), I think the title suggests something more like “EA leaders think these are the priorities - probably you should update towards these being the priorities”. I think that the reality is more like “some people that CEA staff think it’s useful to get together for an event think X”, which is something that people should update on less.
We’re currently at a team retreat where we’re talking more about what the goals of the event should be in the future. I think that it’s possible that the event looks pretty different in future years, and we’re not yet sure how. But I think that whatever we decide, we should think more carefully about the criteria for attendees, and that will include thinking carefully about the approach to cause prioritization.