Community
Posts about the EA community and projects that focus on the EA community

Quick takes

11
3d
9
Rate limiting on the EA Forum is too strict. Given that people downvote because of disagreement rather than because of quality or civility (or judge quality and civility largely on the basis of what they agree or disagree with), there is a huge disincentive against expressing opinions that are unpopular or controversial relative to the views of active EA Forum users (not necessarily relative to the general public or relevant expert communities) on certain topics. This is a message I saw recently:

You aren't just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people); you're rate limited for as many days as it takes you to gain 25 net karma on new comments, which might take a while, since you can only leave one comment per day and people might keep downvoting your unpopular comment. (Unless you delete it, which I think I've seen happen, but which I won't do myself, because I'd rather be rate limited than self-censor.)

The rate limiting system is a brilliant idea for new users and users with less than 50 total karma, the ones with little plant icons next to their names. It's an elegant, automatic way to stop spam, trolling, and other abuses. But my forum account is 2.5 years old and I have over 1,000 karma. I have 24 posts published over 2 years, all with positive karma. My average karma per post/comment is +2.3 (not counting the default karma that all posts/comments start with; this is just counting karma from people's votes).

Examples of comments of mine that have been downvoted to net -1 karma or lower include a methodological critique of a survey that was later accepted as correct and led to the research report of an EA-adjacent organization being revised. In another case, a comment was downvoted to negative karma when it was only an attempt to correct the misuse of a technical term in machine learning, a topic which anyone can conf
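A minimal sketch of the rate-limit rule as I understand it from the description above, in Python. The -5 trigger threshold is an assumed value for illustration, and the function and variable names are hypothetical; this is not the Forum's actual implementation. The 25-karma recovery figure and the one-comment-per-day cap are taken from the description.

```python
# Illustrative sketch only: models the rate-limit rule described above.
RECENT_KARMA_THRESHOLD = -5   # assumed trigger value, for illustration
RECOVERY_KARMA = 25           # net karma on new comments needed to exit the limit
COMMENTS_PER_DAY_WHILE_LIMITED = 1

def is_rate_limited(recent_karma: int, karma_gained_on_new_comments: int) -> bool:
    """True while a user remains limited under the rule described above."""
    triggered = recent_karma < RECENT_KARMA_THRESHOLD
    recovered = karma_gained_on_new_comments >= RECOVERY_KARMA
    return triggered and not recovered

# Example: one badly received comment trips the limit, and it persists
# until new comments have accumulated 25 net karma.
print(is_rate_limited(recent_karma=-8, karma_gained_on_new_comments=4))   # True
print(is_rate_limited(recent_karma=-8, karma_gained_on_new_comments=25))  # False
```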
1
4d
Posting this here for a wider reach: I'm looking for roommates in SF! I'm interested in leases that begin in January. Right now, I know three others who are interested, and we have a low-key Signal group chat. If you are interested, direct message me here or on one of my linked socials and we will hop on a 15-minute call to determine whether we would be a good match!
16
7d
Petty complaint: Giving What We Can just sent me an email with the subject line "Why I give 🔶 and why your impact is greatest this week". It did not explain why my impact is greatest this week.
2
1mo
4
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering whether it's a different model, just good prompting, or something else. @Toby Tremlett🔹, are you SummaryBot's keeper? Or did you just manage its evil twin?
-34
1mo
21
The context for what I'm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. Warning: this is a polemic that uses harsh language. I still completely, sincerely mean everything I say here, and I consciously endorse it.[1]

---

It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, citing overtly racist and white supremacist sources to support this argument (including a source with significant connections to the Nazi Party of 1930s and 1940s Germany and to the American Nazi Party, a neo-Nazi party), and that post can receive a significant amount of approval and defense from people in EA, even after perceptive readers have stripped away the thin disguise over the racism. That is such a bonkers thing and such a morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can't correct it.[2]

My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left, and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don't like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the "rationalist" community?? What are you talking about?! As I recall based on votes, a majority of forum users who
0
1mo
Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong. Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.

This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can feel emotionally unbearable, because it feels as if to change your mind is to accept that you deserve to be humiliated, that it's morally appropriate. Conversely, if you humiliated others (or attempted to), to admit you were wrong about the idea is to admit you wronged these people and did something immoral. That too can feel unbearable.

So, a few practical recommendations:

- Don't call yourself rational or anything similar
- Try to practice humility when people disagree with you
- Try to be curious about what other people think
- Be kind to people when you disagree, so it's easier to admit if they were right
- Avoid people who aren't kind to you when you disagree, so it's easier to admit if you were wrong
-4
2mo
1
I deleted my original comment about the first DDS attack because I was told it was a "crackpot theory" and shouldn't be on the forum. I didn't phrase it right, but I was asking whether any catastrophic-risk groups have research or estimates on the probability of attacks like this increasing (especially with global heat around the AGI race), and for any recommendations on how regular citizens can prepare.

The second attack hit this morning, under a week later, and it's now picking up in the press as a potential threat. So I'm going to trust my gut on this one and say I'm not wrong in forecasting that this is an immediate emerging threat. I'm going to start compiling some work on this; let me know if you're interested.
10
2mo
TL;DR: $100,000 for insights into an EA's unsolved medical mystery. (Sharing on behalf of the patient to preserve their anonymity.)

The Medical Mystery Prize is a patient-funded initiative offering a $100,000 grand prize (plus smaller awards) for ideas that help advance a difficult, unresolved medical case. The patient works in AI safety; the goal is to solve his health issue so that he can do his best work. All patient records are fully anonymized and HIPAA-compliant, and submissions for the prize will be reviewed by a licensed healthcare provider before reaching the patient.

Even if you don't have a complete solution, it's worth taking a look; sometimes a fresh perspective or small hypothesis can make a real difference! Partial contributions will also be awarded smaller prize amounts. Check out the case details and submission info at themedicalmysteryprize.com.

Posts in this space are about

Community, Effective altruism lifestyle