Community
Posts about the EA community and projects that focus on the EA community

Quick takes

22 karma · 1d · 11 comments
Hey y'all, my TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. That isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and the framing isn't particularly kind to EA. As criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "is spending millions on hosting AI safety conferences."

I think there's a lot to take from it. The first point relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even mildly inaccurate ones,[1] they capture the attention of many. This video is a significant impression, but it's not the be-all and end-all, and we should seek to define ourselves lest we be defined by videos like it.

The second point is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented. @LennoxJohnson grappled with this really thoughtfully a few months ago, describing his journey from a zero-sum form of leftism, and a belief in the need for structural change, toward becoming more sympathetic to the orthodox EA approach. But I don't think we can depend on similar reckonings happening to everyone, all at the same time. Here I think the solution is much less clear than for the PR problem: on the one hand, EA sometimes doesn't grapple enough with systemic change; on the other, society would be dramatically better if more people took a more EA outlook toward…
8 karma · 4d · 25 comments
I've seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven't actually seen evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.

Many people were worried about and preparing for covid in early 2020, before everything finally snowballed in the second week of March 2020. I remember it personally. In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest; I couldn't find an older post that was relevant.) That LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.

If you look at the covid-19 tag on LessWrong, the next post after that first prepper one is from February 5, 2020, and the posts don't start to get really worried about covid until mid-to-late February. How was the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong. The tone of the article is fairly alarmed: it notes that streets in China are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and quotes alarmed experts. The worried posts on LessWrong don't start until weeks after this article was published.
6 karma · 11d
Praise for Sentient Futures

By now, I have had the chance to meet most staff at Sentient Futures, and I think they really capture the best that EA has to offer, both in their organisational goals and in their culture. They are kind, compassionate, impartial, frugal: the things that I feel the movement has compromised on in the past years in pursuit of trying to save us from AI. I really hope this kind of culture becomes more prominent in the 4th wave of EA,[1] with similar organisations popping up in the coming months and years.

P.S.: I have friends at the org, so this obviously makes me biased :)

1. ^ The 3rd wave is described here, in Ben West's post. If you go with that numbering, then what I'm describing would be the 5th wave.
18 karma · 14d · 15 comments
Rate limiting on the EA Forum is too strict. Given that people downvote because of disagreement rather than because of quality or civility (or judge quality and civility largely on the basis of what they agree or disagree with), there is a huge disincentive against expressing opinions that are unpopular or controversial on certain topics (relative to the views of active EA Forum users, not necessarily relative to the general public or the relevant expert communities). This is a message I saw recently:

You aren't just rate limited for 24 hours once you fall below the recent-karma threshold (which can be triggered by one comment that is unpopular with a handful of people); you're rate limited for as many days as it takes you to gain 25 net karma on new comments. That might take a while, since you can only leave one comment per day, and people might keep downvoting your unpopular comment. (Unless you delete it, which I think I've seen happen, but I won't do that myself, because I'd rather be rate limited than self-censor.)

The rate-limiting system is a brilliant idea for new users, or users with less than 50 total karma: the ones who have little plant icons next to their names. It's an elegant, automatic way to stop spam, trolling, and other abuses. But my forum account is 2.5 years old, and I have over 1,000 karma. I have 24 posts published over 2 years, all with positive karma. My average karma per post/comment is +2.3 (not counting the default karma that all posts/comments start with; this is just counting karma from people's votes).

Examples of comments of mine that were downvoted to net -1 karma or lower include a methodological critique of a survey that was later accepted as correct and led to the revision of an EA-adjacent organization's research report. In another case, a comment was downvoted to negative karma when it was only an attempt to correct the misuse of a technical term in machine learning, a topic which anyone can conf…
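For what it's worth, the rule this take describes can be captured in a few lines. Here is a minimal sketch of one reading of it, in Python; the names and the threshold value are illustrative assumptions, not the Forum's actual implementation:

```python
# Minimal sketch of the rate-limiting rule described in the quick take above.
# Assumed reading: falling below a recent-karma threshold limits a user to one
# comment per day, and the limit persists until they earn +25 net karma on
# comments made since being limited. All names and the threshold value are
# illustrative, not the Forum's actual code.
from dataclasses import dataclass

RECENT_KARMA_THRESHOLD = 0   # assumed value of the "recent karma threshold"
KARMA_TO_RECOVER = 25        # per the take: +25 net karma lifts the limit
MAX_COMMENTS_PER_DAY = 1     # per the take: one comment per day while limited

@dataclass
class UserState:
    recent_karma: int             # net karma on the user's recent activity
    karma_since_limited: int = 0  # net karma on comments made while limited
    comments_today: int = 0

def is_rate_limited(user: UserState) -> bool:
    """Limited while below the threshold and not yet recovered +25 net karma."""
    below_threshold = user.recent_karma < RECENT_KARMA_THRESHOLD
    return below_threshold and user.karma_since_limited < KARMA_TO_RECOVER

def may_comment(user: UserState) -> bool:
    """One comment per day while limited; unrestricted otherwise."""
    if not is_rate_limited(user):
        return True
    return user.comments_today < MAX_COMMENTS_PER_DAY
```

Under this reading, a single heavily downvoted comment can keep even a long-standing, high-karma account limited indefinitely, which is the crux of the complaint above.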
1 karma · 16d
Posting this here for a wider reach: I'm looking for roommates in SF! I'm interested in leases that begin in January. Right now, I know three others who are interested, and we have a low-key Signal group chat. If you're interested, direct message me here or on one of my linked socials and we'll hop on a 15-minute call to determine whether we'd be a good match!
16 karma · 19d
Petty complaint: Giving What We Can just sent me an email with the subject line "Why I give 🔶 and why your impact is greatest this week". It did not explain why my impact is greatest this week.
2 karma · 1mo · 4 comments
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model, or maybe just good prompting, or something else. @Toby Tremlett🔹, are you SummaryBot's keeper? Or do you just manage its evil twin?
-37 karma · 2mo · 21 comments
The context for what I'm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here.

Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]

----------------------------------------

It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, citing overtly racist and white supremacist sources to support this argument (including a source with significant connections to the Nazi Party of 1930s and 1940s Germany and to the American Nazi Party, a neo-Nazi party), and that the post can receive a significant amount of approval and defense from people in EA, even after perceptive readers remove the thin disguise over the racism. That is such a bonkers and morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can't correct it.[2]

My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left, and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don't like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the "rationalist" community?? What are you talking about?! As I recall based on votes, a majority of forum users who…

Posts in this space are about: Community · Effective altruism lifestyle