I think that's the first time I've seen this written as clearly as here, and I don't really like it or agree with it.
Apologies, I should be clear that when I say "the messaging changed" I'm just describing what I believe happened, not that I think it was a good thing. I agree that some people aren't interested in AIS, or aren't the right fit, but can still make the world substantially better. I do, however, think that we should openly say "we think AIS is an important cause area" and should spend less time arguing about why that isn't a weird thing to think.
I also get the impression that you forgot to mention the value of community for maintaining strong values and sticking to your plan.
I agree that this is a value of community building, but it seems similarly relevant for both explicitly longtermist community building and broad EA community building?
If these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement.
I agree. We're working on increasing transparency, so expect to see more posts on this in the future.
Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly?
I'm not 100% clear on what you mean here, so I've taken a few guesses and answered all of them.
I also want to emphasise that an important component of our grantmaking is creating healthy intellectual scenes.
Hey Miri,
Typically, unless someone is donating large amounts of money, we would interpret direct work as more valuable. But all of these things sit on a scale, and there is a qualitative element to the interpretation. With donations this is especially obvious: it is measurably true that some people are able to donate much more than others. There is also an element of this with careers, where some people are able to have a huge impact and others a smaller one (yet still large in absolute terms). Because there are a lot of sensitive, qualitative judgement calls, we can't provide full reasoning transparency.
Hey Linda,
I'm head of CEA's groups team. It is true that we care about career changes, and it is true that our funders care about career changes. However, it is not true that this is the only thing we care about. There are lots of other things we value: for example, grant recipients have started effective institutions, set up valuable partnerships, and engaged with public-sector and philanthropic bodies. This list is not exhaustive! We also care about how welcoming groups are, and we care about groups not using "polarizing techniques".
On longtermist pressure: I have recently written a post about why we believe in a principles-first approach rather than an explicitly longtermist route.
Hi Dušan,
I work with Ben as head of groups at CEA. If I may answer:
How important to you is pushing to open EA groups in countries where a lot of aid is going?
In general we've found it very difficult to "push" for opening an EA group. Running an impactful EA group requires a pretty high level of EA knowledge (alongside other skills), and finding an organizer with that level of skill in a country without an existing EA group has historically proved difficult.
Instead we have prioritized global platforms (e.g., Virtual Programs, EA Anywhere, and professional/affiliation-based groups). Additionally, when someone does wish to start a group, we offer support (e.g., the resource centre and welcomer calls).
I've only been at CEA for part of Max's tenure, but it's been a real privilege seeing you work. What you've achieved is absolutely incredible.
I was surprised by your "unpleasant to a lot of communities" comment. By that, are you referring to the dynamic where, if you have to place value on outcomes, some people/orgs will be disappointed with the value you place on their work?
Not really. I was referring more to the fact that any attempt to quantify the likely impact someone will have is (a) inaccurate and (b) likely to create some sort of hierarchy and unhealthy community dynamics.
This seems like another area where control groups would be helpful in making the exercise an actual experiment. It seems like a fairly easy place to introduce at least some randomization.
I agree with this. I like the idea of successful groups joining existing mentorship programs, so that there is a natural control group of "the average of all the other mentors". (There are many ways this experiment would be imperfect, as I'm sure you can imagine.) The main implementation challenge so far has been getting groups to actually want to do this. We are very careful to preserve groups' autonomy, and I think this acts as a check on our behaviour. If groups engage with our programs voluntarily, and we don't make that engagement a condition of funding, it demonstrates that our programs are at least delivering value in the eyes of the organizers. If we started claiming more authority and assigning groups to experiments, we'd lose one of our few feedback measures. On balance, I would prefer to have the feedback mechanism rather than the experiment.

(The previous paragraph contains some simplifications; it would certainly be possible to find examples where we haven't optimised purely for group autonomy.)
Hey, thanks for this. I work on CEA's groups team. When you say "we don’t know much about which work ... has the most impact on the outcomes we care about", I think I would rather say:
a) We have a reasonable, yet incomplete, view of how many people different groups cause to engage with EA, and some measure of the depth of that engagement
b) We are unsure how many of those people would have become engaged in EA anyway
c) We do not have a good mapping from "people engaging with EA" to the things that we actually want in the world
I think we should be sharing more of the data we have on which types of community building have, so far, seemed to generate more engagement. To this end, we have a contractor who will be providing a centralized service for some community-building tasks, to help spread what is working. I also think groups that seem to be performing well should be running experiments in which other groups adopt their model. I have proposed this to several groups and will continue to do so.
However, trying to predict the mapping from engagement to good things happening in the world is (a) sufficiently difficult that I don't think anyone can do it reliably and (b) deeply unpleasant to a lot of communities. In trying to measure this we could decrease the amount of good happening in the world, and we probably wouldn't succeed in taking the measurement accurately anyway.
I've heard people express the idea that top-of-funnel community building is not worth the effort, as EA roles often get 100+ applicants.
I think this is misguided. Great applicants may get a job after only a few applications, while poor applicants may apply to many, many jobs without getting one. As a result, you should expect poor applicants to be disproportionately represented in the applicant pool, so the raw number of applicants isn't that informative. This point is weakened by recruitment systems being imperfect, but as long as you believe recruitment systems have some ability to select people, I think this take holds.
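To make the selection effect concrete, here is a minimal simulation sketch in Python. The hire probabilities and population sizes are invented purely for illustration, not drawn from any real data:

```python
import random

random.seed(0)

# Hypothetical per-application hire probabilities (invented numbers).
P_HIRE = {"strong": 0.5, "weak": 0.05}
N_APPLICANTS_PER_TYPE = 1_000  # equal numbers of strong and weak applicants

# Count how many applications each type of applicant contributes to the pool.
applications = {"strong": 0, "weak": 0}
for kind, p in P_HIRE.items():
    for _ in range(N_APPLICANTS_PER_TYPE):
        # Each applicant keeps applying until they are hired.
        hired = False
        while not hired:
            applications[kind] += 1
            hired = random.random() < p

total = sum(applications.values())
for kind, count in applications.items():
    print(f"{kind}: {count} applications ({count / total:.0%} of the pool)")
```

With these made-up numbers, each weak applicant submits about ten times as many applications as each strong one, so weak applicants account for roughly 90% of the pool despite being only half of the applicant population.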
To be clear, I'm only making a claim about this specific argument, not about whether top-of-funnel community building is a good idea on the margin.
H/T Amarins for nudging me to post this