Community
Posts about the EA community and projects that focus on the EA community

Quick takes

3 · 4d
Did 80,000 Hours ever list global health as a top area to work on? If so, does anyone know when that changed? Apologies if this is somewhere obvious; I didn't see anything in my quick scan of 80k's posts/website.
2 · 8d
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me wary. https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus

TL;DR: 80,000 Hours, once a cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, and podcasts), has decided to focus on upskilling and producing content about AGI risk, AI alignment, and an AI-transformed world.

According to their post, they will still host the backlog of content on non-AGI causes, but may not promote or feature it. They also say roughly 80% of new podcasts and content will be AGI-focused, and that other cause areas such as nuclear risk and biosecurity may have to be covered by other organisations.

Whilst I cannot claim in-depth knowledge of robust norms for such shifts, or of AI specifically, I would set aside the actual claims behind the shift and instead focus on the potential friction in how the change was communicated. To my knowledge (please correct me), no public information or consultation was offered beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this degree of openness, but since openness is a value heavily emphasised in EA, it feels slightly alienating.

Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily bad in itself, and it has the advantage of signalling being 'with the times' and 'putting our money where our mouth is' on cause-area risks. However, in an evidence-based framework, surely at least some heads-up would go a long way in reducing short-term confusion or gaps.

Many introductory programs and fellowships use 80k resources, sometimes as embeds rather than as standalone resources. Despite claimi…
9 · 9d
Learnings from a day of walking conversations

Yesterday, I did 7 one-hour walks with Munich EA community members. Here's what I learned and why I would recommend it to similarly extroverted community members:

Format
* Created an info document and 7 one-hour Calendly slots and promoted them via our WhatsApp group
* One hour worked well as a default timeframe - 2 conversations could have been shorter while others could have gone longer
* Scheduling more than an hour with someone unfamiliar can feel intimidating, so I'll keep the 1-hour format
* Walked approximately 35 km throughout the day and painfully learned that street shoes aren't suitable - got blisters that could have been prevented with proper hiking boots

Participants
* Directly invited two women to ensure diversity, resulting in 3/7 non-male participants
* Noticed that people from timeslots 1 and 3 spontaneously met for their own 1-1 while I was busy with timeslot 2
* Will actively encourage more member-initiated connections next time to create a network effect

Conversations
* My prepared document helped skip introductions and jump straight into meaningful discussion
* Tried balancing listening vs. talking, succeeding in some conversations while others turned into them asking me more questions
* Expanded beyond my usual focus on career advice, offering a broader menu of discussion topics
* This approach reached people who initially weren't interested in career discussions
* One participant was genuinely surprised their background might be impactful in ways they hadn't considered
* Another wasn't initially interested in careers but ended up engaging with the topic after natural conversation flow
* 2 of 7 people shared personal issues, where I focused on empathetic listening and sharing relevant parts of my own experience
* The remaining 5 discussions centered primarily on EA concepts and career-related topics

Results
* Received positive feedback suggesting participants gained eithe…
6 · 10d
I'm visiting Mexico City. Anyone I should meet, or anyone who would like to meet up? About me: ex-President of LSE EA, doing work in global health, prediction markets, and AIS. https://eshcherbinin.notion.site/me
28 · 14d · 3
I would like to publicly set a goal not to comment on other people's posts with criticism of some minor side point that doesn't matter. I have a habit of doing that, but I think it's usually more annoying than it is helpful, so I would like to stop. If you see me doing it, feel free to call me out. (I reserve the right to make substantive criticisms of a post's central arguments.)
4 · 23d
The True Believer by Eric Hoffer is a book about the psychology of mass movements. I think there are important cautions for EAs thinking about their own relationship to the movement. I wanted to write a draft amnesty post about this, but I couldn't write anything better than this Lou Keep essay about the book, so I'll just recommend you read that.
-1 · 25d
On AI alarmists:

> A fair-sized stream seems vast to one who until then
> Has never seen a greater; so with trees, with men.
> In every field each man regards as vast in size
> The greatest objects that have come before his eyes.

(Lucretius)
3 · 1mo · 2
Anyone else get a pig butchering scam attempt lately via DM on the forum? I just got the following message:

> Happy day to you, I am [X] i saw your profile today and i like it very much,which makes me to write to you to let you know that i am interested in you,therefore i will like you to write me back so that i will tell you further about myself and send you also my picture for you to know me physically.  [EMAIL]

I reported the user on their profile and opened a support request, but just FYI.
