Master's student in ML and Research Assistant in Explainable ML at the University of Tübingen. Currently working on my Master's thesis, which is on the composition of neural networks and probabilistic graphical models. After that, I would like to contribute to AI safety.
Would it be possible to measure "churn rate" somehow? I feel this is a very important indicator of community health. A number of the proxies you proposed take more of an outside perspective, from which it might be difficult to gauge internal community health. There are, of course, multiple reasons why people leave a community, but I would expect a strong correlation between the number of people leaving and how "toxic" EA becomes.