Speaking only for ConcernedEAs, we are likely to remain anonymous until costly signals are sent that making deep critiques in public will not damage one's career, funding, or social prospects within EA.
We go into more detail in Doing EA Better, most notably here:
Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.
This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:
- Observes the types of projects (and people) that succeed (or fail) at acquiring funding
- i.e. few, if any, deep critiques or otherwise heterodox/“heretical” works
- Looks into the backgrounds of grantmakers and sees that they have very similar backgrounds and opinions (i.e. they are highly orthodox)
- Experiences the generally claustrophobic epistemic atmosphere of EA
- Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs
- Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk. Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.
- Sees the revolving door and close social connections between key donors and main scholars in the field
- Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned
- If this is what is said in public (which we have witnessed multiple times), what is said in private?
We go into more detail in the post, but the most important step is a radical diversification of viewpoints within grantmaking and hiring decision-making bodies.
As long as the vast majority of resource-allocation decisions are made by a tiny and homogenous group of highly orthodox people, the anonymity motive will remain.
This is especially true when one of the (sometimes implicit) selection criteria for so many opportunities is perceived "value-alignment" with a very specific package of often questionable views, i.e. EA Orthodoxy.
We appreciate that influential members of the community (e.g. Buck) are concerned about the increasing prevalence of anonymity, but unfortunately expressing concern and promising that there is nothing to worry about is not enough.
If we want the problem to be solved, we need to remove the factors that cause it.
We’re very happy to hear that you have seriously considered these issues.
If the who-gets-to-vote problem were solved, would your opinion change?
We concur that corrupt intent/vote-brigading is a potential drawback, but not an unsolvable one.
We discuss some of these issues in our response to Halstead on Doing EA Better:
Several possible factors could be used to draw a hypothetical boundary, e.g.
These and others could be combined to define some sort of boundary, though of course it would need to be kept under constant monitoring & evaluation.
Given a somewhat costly signal of alignment, it seems very unlikely that someone would dedicate a significant portion of their life to going "deep cover" in EA in exchange for a very small chance of being randomly selected as one among many people in a sortition assembly deliberating on broad strategic questions about the allocation of some proportion of one EA-related fund or another.
In any case, it seems like something at least worth investigating seriously, and it could eventually be explored through a consensus-building tool, e.g. pol.is.
What would your reaction be to an investigation of the boundary-drawing question as well as small-scale experimentation like that we suggest in Doing EA Better?
What would your criteria for “success” be, and would you be likely to change your mind if those were met?
Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.
We’re going to respond to your points in the same format that you made them in, for ease of comparison.
Should EA be distinctive for its own sake, or should it seek to be as good as possible? If EA became structurally more similar to some environmentalist movements in certain respects, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as the focus on impact, the commitment to cause-prioritisation, and the willingness to quantify (which is often a good thing, as we say in the post), all of which are frequently lacking in left-wing environmentalism.
As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
Finally, we are not sure why you are so keen to repeatedly apply the term “left-wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
* We actually touch on it a little: the mention of the Californian Ideology, which we recommend everyone in EA read about.
The term is explored in an upcoming section, here.