One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the "kill everyone" bullet.
Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they've recently read.
Even if we don't agree with the human extinction radicals, people might split off from the movement and end up supporting it.
Most human extinction radicals seem to emerge completely separate from the EA movement and never intersect with it (e.g. AI scientists who believe in human extinction). If people like Tomasik or hÉigeartaigh ever end up pro-extinction, it will probably be because a recent calculation flipped them to prioritizing s-risk over x-risk; but sign uncertainty and error bars remain more than wide enough to keep them in their network of EV-focused friends (at minimum, because another calculation could obviously flip them right back).
One interpretation of the FTX affair was that it was a case of seemingly EA aligned people splitting off to do unethical things justified by utilitarian math.
Wasn't the default explanation that SBF/FTX fell into a purity spiral with no checks and balances? Combined with the high uncertainty of crypto trading, that left SBF psychologically predisposed to betting all of EA on his career instead of betting his career on all of EA. Powerful people tend to become power-seeking, and that's a pretty solid prior in most cases.
Will this yacht replace the Empress of the Seas cruise ship grant, planned to house the new headquarters of Open Philanthropy 2? Unlike the original cruise ship design, I'm highly skeptical that a yacht will be large enough to serve as a headquarters housing the top-performing 50% of EA.
This is really interesting. I recommend posting it to LessWrong; the people there will probably find it more interesting than readers here will.
I don't think this is about "good" or "bad" posts; it's about whether the post mainly focuses on reviewing or improving the community as a whole, rather than on improving individuals or their productivity. By that standard, "EA burnout" wouldn't count as community, the "longtermist turn" clearly would, and anything about the history of longtermism (e.g. in ancient Rome) would not.
You brought up a good point about that language barrier post being ambiguous.
Are you visiting China right now? For people in China, it's best to contact them through connections. The best person for that is [removed for privacy reasons].
If this post itself ends up with high karma, that would be a little ironic. But it would also mean a big net reduction in karma misallocated to community posts, which would probably more than compensate for this one additional community post getting a ton of karma.
This makes sense. Upvotes are fundamentally anonymous, so we have no idea what kinds of people are upvoting what things. I'm pretty surprised at how mathematically obvious and explanatory your findings are in hindsight, yet they never occurred to me or anyone else until now.
I'd like to add that, just as auctions tend to be won by bidders who got carried away and bid more than the object was worth to them (the winner's curse), it makes sense that 80% of the upvotes could be coming from 20% of the forum's readers. Some of those people might be spending a little too much time invested in the forum rather than going to events, connecting with lots of people, and seeing what they're spending their time working on.
I hope that sharing papers and getting feedback still works as well, or even better, under the new solution. For example, I'm really glad I chanced across Akhil's research and can now share it with all sorts of people I meet in my line of work, even though my own priority is AI and AGI policy and I would never have encountered it if not for the forum.