Community
Posts about the EA community and projects that focus on the EA community

Quick takes

3
5h
Austin Chen (co-founder of Manifold) shared some thoughts on the EA community during a recent interview with a former EA [see transcript here]. I think it is good to deter unscrupulous ultra-high-net-worth individuals from engaging with EA. I think it was good that various 'EA thought leaders' came out as they did and said something along the lines of "we do not endorse this. Don't commit fraud." I'm not entirely sure what Chen thinks the alternative could or should have been. Defend SBF's motives? Say nothing? Either of those approaches seems like the kind of optics-maxing he critiques as a 'huge error' (only instead of aiming to 'broadly protect reputation', it's 'protect reputation in the eyes of billionaires'). It's worth pointing out that Chen has a slightly more sympathetic view of Sam than I do:
1
2d
Richard Ngo has a selection of open questions in his recent post. One question that caught my eye: I originally created this account to share a thought experiment I suspected might be a little too 'out there' for the moderation team. Indeed, it was briefly redacted and didn't appear in the comment section for a while (it does now). It was, admittedly, a slightly confrontational point, and I don't begrudge the moderation team for censoring it. They were patient and transparent in explaining why it was briefly redacted. You can read the comment and probably guess correctly why it was flagged. Still, I am curious to hear of other cases like this. My guess is that in most of them, the average forum reader will side with the moderation team. LessWrong publishes most of its rejected posts and comments on a separate webpage. I say 'most' as I suspect infohazards are censored from that list. I would be interested to hear the EA Forum moderation team's thoughts on this approach, and whether it's something they've considered, should they read this and have time to respond.[1]
1. ^ Creating such a page would also allow them to collect on Ngo's bounty, since they would be answering both how much censorship they do and (assuming they attach moderation notes) why.
68
2d
1
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA-adjacent. I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.
10
4d
One of the benefits of the EA community is that it acts as a social technology in which altruistic actions are high-status: earning to give, pledging, and not eating animals are all venerated to varying degrees within the community. Pledgers have coordinated to add the orange square emoji to their EA Forum profile names (and sometimes to their Twitter bios). I like this, as it both helps create an environment where one might sometimes be forced to think "wow, lots of pledgers here, should I be doing that too?" and singles out those deserving of our respect. Part of me wonders if 'we' should go further in leveraging this: bestow small status markers on those who make a particularly altruistic sacrifice. Unfortunately, there is no kidney emoji, so perhaps those who donate a kidney will need to settle for the kidney bean emoji (🫘). This might seem ridiculous (I am half joking about the kidney beans), but creating neat little ways for those who behave altruistically to reap the status reward might ever so slightly encourage others to collect on the bounty (i.e. donate a kidney or save a drowning child), as well as rewarding those who have already done the good thing.
3
11d
Did 80,000 Hours ever list global health as a top area to work on? If so, does anyone know when that changed? Apologies if this is somewhere obvious; I didn't see anything in my quick scan of 80k's posts and website.
2
15d
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me feel wary. https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus

TL;DR: 80,000 Hours, a once cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, podcasts), has decided to focus on upskilling and producing content centred on AGI risk, AI alignment, and an AI-transformed world.

According to their post, they will still host the backlog of content on non-AGI causes, but may not promote or feature it. They also say roughly 80% of new podcasts and content will be AGI-focused, and other cause areas such as nuclear risk and biosecurity may have to be covered by other organisations. Whilst I cannot claim in-depth knowledge of robust norms for such shifts, or of AI specifically, I would set aside the actual claims behind the shift and instead focus on the potential friction in how the change was communicated. To my knowledge (please correct me), no public information or consultation was provided beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this amount of openness, but since openness is a value heavily emphasised in EA, it feels slightly alienating. Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could just as quickly pivot. This isn't necessarily inherently bad, and it has the advantageous signalling of being 'with the times' and 'putting our money where our mouth is' in terms of cause-area risks. However, in an evidence-based framework, surely at least some heads-up would go a long way in reducing short-term confusion or gaps.

Many introductory programs and fellowships utilise 80k resources, sometimes as embeds rather than as standalone resources. Despite claimi
9
16d
Learnings from a day of walking conversations

Yesterday, I did 7 one-hour walks with Munich EA community members. Here's what I learned and why I would recommend it to similarly extroverted community members:

Format
* Created an info document and 7 one-hour Calendly slots and promoted them via our WhatsApp group
* One hour worked well as a default timeframe - 2 conversations could have been shorter while others could have gone longer
* Scheduling more than an hour with someone unfamiliar can feel intimidating, so I'll keep the 1-hour format
* Walked approximately 35km throughout the day and painfully learned that street shoes aren't suitable - got blisters that could have been prevented with proper hiking boots

Participants
* Directly invited two women to ensure diversity, resulting in 3/7 non-male participants
* Noticed that people from timeslots 1 and 3 spontaneously met for their own 1-1 while I was busy with timeslot 2
* Will actively encourage more member-initiated connections next time to create a network effect

Conversations
* My prepared document helped skip introductions and jump straight into meaningful discussion
* Tried balancing listening vs. talking, succeeding in some conversations while others turned into them asking me more questions
* Expanded beyond my usual focus on career advice, offering a broader menu of discussion topics
* This approach reached people who initially weren't interested in career discussions
* One participant was genuinely surprised their background might be impactful in ways they hadn't considered
* Another wasn't initially interested in careers but ended up engaging with the topic after natural conversation flow
* 2 of 7 people shared personal issues, where I focused on empathetic listening and sharing relevant parts of my own experience
* The remaining 5 discussions centered primarily on EA concepts and career-related topics

Results
* Received positive feedback suggesting participants gained eithe
6
17d
I'm visiting Mexico City - anyone I should meet, or anyone who would like to meet up? About me: ex-President of LSE EA, doing work in global health, prediction markets, and AIS. https://eshcherbinin.notion.site/me
