Edit: A previous version of this post contained significant errors, which were pointed out in the comments. I mark and correct them in this version, but I believe my point is largely unaffected.
I originally wanted to write a comment to the forum post CEA Disambiguation, which contains further context, but I believe this warrants its own post.
The Effective Ventures Foundation (formerly known as CEA; I'll call it EVF) runs many projects, including 80,000 Hours, Giving What We Can, Longview Philanthropy, EA Funds, and the Centre for Effective Altruism (CEA), which in turn seems to run this forum.
It is very strange to learn that these organizations are not independent from each other, and the EVF board can exert influence over each of them. I believe this structure was set up so the EVF board has central control over EA strategy.
Edit: I now believe that this structure was set up to share resources like ops and oversight. It's not clear to me that this is the correct choice. I do not believe that 80k, GWWC, EA Funds, and CEA are sufficiently protected from interference or legal risks.
I think this is very bad. EVF cannot be trusted to serve the EA community as a whole impartially; it misleads donors and exposes effective altruism to unnecessary risks of contagion.
An example (misleading donors):
As "Giving What We Can", EVF currently recommends donations to a number of funds that are run by EVF:
- Longview Philanthropy: Longtermism Fund
- several Funds run by Effective Altruism Funds
Through the EA Funds "Long-Term Future Fund", EVF has repeatedly paid out grants to itself; for example, in July 2021 it paid itself $177,000 for its project "Centre for the Governance of AI".
Another example (biased advertising):
On https://www.effectivealtruism.org/, which serves as an introduction to EA, EVF links to its own project 80,000 Hours, but not to the competing Probably Good.

In both examples, the obvious conflicts of interest are stated nowhere. Edit: On GWWC's page, the conflict of interest is stated, but I missed it when quickly looking for it.
What should we do?
I have not thought hard about this, but I have come up with a few obvious-sounding ideas. Please leave your thoughts in the comments!
This is what I think we should do:
- I think we should break up the EVF into independent projects, especially those that direct or receive funding. Until that happens, we should conceive of EVF as a single entity.
- We need to push for more transparency. EVF's "EA Funds"-branded funds publicly disclose their spending, which is commendable! EVF's "Longview Longtermism Fund" does not. (Edit: The Fund had previously credibly committed to releasing a spending report, which I missed)
- Funds should definitely disclose their conflicts of interest.
- We should champion community-run organizations like EA Germany e.V. or the Czech EA Association, and let them step into their natural role of representing the community. GWWC members should demand control over their institution.
- We should continue the debate about EA's governance norms. In order to de-risk the community and to represent our values, we should establish democratic, transparent and fair governance on all levels, including local groups.
- We should probably rethink supporting community leaders who consolidate their power instead of distributing it.
DFTBA,
Ludwig
PS: the same consideration applies for effektiveraltruismus.de, which is run by an EA donation platform, and not by EA Germany. (Edit: The page has now been transferred. Thanks!)
I'm very thankful for EVF and associated orgs, and as referenced by others, it's understandable how/why the community is currently organized this way. Eventually, depending on growth and other factors, it'll probably make sense for the various subs to legally spin off, but I'm not sure if this is high priority - it depends on just how worried EAs are about governance in the wake of this past month.
I will say, conflict of interest disclosures are important, but it seems like they may be doing a lot of work here. As far as I can tell[1], leadership within these organizations also functions independently, and as EAs they're particularly aware of bias, so they've built processes to mitigate it. But being aware of bias and disclosing it doesn't necessarily stop [trustworthy] people from being biased (see: doctors prescribing drugs from companies that pay for talks). Even if these organizations separated tomorrow, I'd half expect them to be in relative lock-step for years to come. Even if these orgs never shared funding/leadership again, they're in the same community, they'll have funders in common, and they'll want to impress the same people, so they'll make decisions with this in mind. I've seen this first-hand in every [non-EA] org I've ever been a part of, in sectors of all sizes, so moving forward we'll have to build with this bug in mind and decide just how much mitigation is worth doing.
I'm aware that none of this is original or ground-breaking but perhaps worth reiterating.
This is a little facetious, but does anyone else find themselves caveating more often these days, just in case...