I don't think this is about "good" or "bad" posts; it's about whether the post is mainly focused on reviewing/improving the community as a whole, or more on improving individuals or their productivity. Under that framing, "EA burnout" wouldn't be in community, "longtermist turn" clearly would be, whereas anything about the history of longtermism (e.g. in ancient Rome) would not.
You brought up a good point about the language barrier post being ambiguous.
This makes sense. Upvotes are fundamentally anonymous, and we have no idea what kinds of people are upvoting what. I'm pretty surprised at how mathematically obvious and explanatory your findings are in hindsight, and yet they never occurred to me or anyone else until now.
I'd like to add that, just as auctions tend to be won by bidders who get carried away and bid more than the item is actually worth to them (the winner's curse), it makes sense that 80% of the upvotes could be coming from 20% of forum readers, and that some of those people might spend a little too much time invested in the forum rather than feeling obligated to go to events, connect with lots of people, and see what they're spending their time working on.
I hope that sharing papers and getting feedback still works as well, or even better, with the new solution. For example, I'm really glad I chanced across Akhil's research and can now share it with all sorts of people I meet in my line of work, even though my own priority is AI and AGI policy and I would never have encountered it if not for the forum.
Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction, whereas average people with foggy worldviews tend to flip in random directions depending on what hot takes they've recently read.
Most human extinction radicals seem to emerge completely separate from the EA movement and never intersect with it, e.g. AI scientists who endorse human extinction. If people like Tomasik or hÉigeartaigh ever end up pro-extinction, it will probably be because a recent calculation flipped them to prioritizing s-risk over x-risk, but sign uncertainty and error bars remain more than wide enough to keep them in their network of EV-focused friends (at minimum, because of the obvious possibility that another calculation flips them right back).
Wasn't the default explanation that SBF/FTX was in a purity spiral with no checks and balances, and that this, combined with the high uncertainty of crypto trading, psychologically predisposed SBF to bet all of EA on his career instead of betting his career on all of EA? Powerful people tend to become power-seeking, and that's a pretty solid prior in most cases.