Building effective altruism


Shortforms

3 · 7h · 1
Some comments Duncan made in a private social media conversation: (Resharing because I think it's useful for EAs to be tracking why rad people are bouncing off EA as a community, not because I share Duncan's feeling—though I think I see where he's coming from!) That seemed like a potential warning sign to me of cultural unhealth on the EA Forum, especially given that others shared Duncan's sentiments. I asked if Duncan would like to share his impression on the EA Forum so EAs could respond and talk it out, and he said: (He was willing to let me cross-post it myself, however.)
48 · 24d · 5
On Socioeconomic Diversity: I want to describe how the discourse on sexual misconduct may be reducing the specific type of socioeconomic diversity I am personally familiar with. I’m a white female American who worked as an HVAC technician with co-workers mostly from racial minorities before going to college. Most of the sexual misconduct incidents discussed in the Time article [https://time.com/6252617/effective-altruism-sexual-harassment/] have likely differed from standard workplace discussions in my former career only in that the higher-status person expressed romantic/sexual attraction, making their statement much more vulnerable than the trash-talk I’m familiar with. In the places most of my workplace experience comes from, people of all genders and statuses make sexual jokes about coworkers of all genders and statuses, not only in their field but while on the clock. I had tremendous fun participating in these conversations. It didn’t feel sexist to me because I gave as good as I got. My experience generalizes well: even when Donald Trump made a joke about sexual assault that many upper-class Americans believed disqualified him, immediately before the election he won, Republican women [https://www.vox.com/2016/10/9/13217158/polls-donald-trump-assault-tape] were no more likely to think he should drop out of the race than Republican voters in general. Donald Trump has been able to maintain much of his popularity despite denying the legitimacy of a legitimate election in part because he identified the gatekeeping elements of upper-class American norms as classist [https://astralcodexten.substack.com/p/a-modest-proposal-for-republicans]. I am strongly against Trump, but believe we should note that many female Americans from poorer backgrounds enjoy these conversations, and many more oppose the kind of punishments popular in upper-class American communities. This means strongly disliking these conversations is not an intrinsic virtue, but a decision EA culture ha
43 · 22d · 1
Proposing a change to how Karma is accrued: I recently reached over 1,000 Karma, meaning my upvotes now give 2 Karma and my strong upvotes give 6 Karma. I'm most proud of my contributions to the forum about economics, but almost all of my increased ability to influence discourse now comes from participating a lot in the discussions on sexual misconduct. An upvote from me on Global Health & Development (my primary cause area) now counts twice as much as an upvote from 12 out of 19 of the authors of posts with 200-300 Karma under the Global Health & Development tag. They are generally experts in their field working at major EA organizations, whereas I am an electrical engineering undergraduate. I think these kinds of people should have far more ability to influence the discussion via the power of their upvotes than me. They will notice things about the merits of the cases people are making that I won't until I'm a lot smarter and wiser and farther along in my career. I don't think the ability to say something popular about culture wars translates well into having insights about the object-level content. It is very easy to get Karma by participating in community discussions, so a lot of people are now probably in my position after the increased activity in that area around the scandals. I really want the people with more expertise in their field to be the ones influencing how visible posts and comments about their field are. I propose that Karma earned from comments on posts with the community tag accrues at a slower rate. Edit: I just noticed a post by moderators that does a better job of explaining why karma is so easy to accumulate in community posts: https://forum.effectivealtruism.org/posts/dDudLPHv7AgPLrzef/karma-overrates-some-topics-resulting-issues-and-potential
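As an illustration of the proposal above, here is a minimal sketch in Python of karma-weighted votes with a discounted accrual rate for Community-tagged posts. The thresholds, vote weights, and the 0.5 discount factor are illustrative assumptions for this sketch, not the Forum's actual values.

# Illustrative sketch of the proposal, not the EA Forum's actual karma system.
# Assumptions: voters at or above 1,000 karma cast upvotes worth 2 (strong: 6),
# everyone else casts upvotes worth 1 (strong: 3), and karma earned on posts
# tagged "Community" accrues at a hypothetical 0.5 multiplier.

COMMUNITY_DISCOUNT = 0.5  # assumed multiplier, chosen only for illustration


def vote_weight(voter_karma: int, strong: bool = False) -> int:
    """Weight a voter's upvote contributes, loosely mirroring the 1,000-karma rule."""
    if voter_karma >= 1000:
        return 6 if strong else 2
    return 3 if strong else 1


def karma_gained(vote_weights: list[int], is_community_post: bool) -> float:
    """Karma the author accrues from a given set of votes on one post."""
    total = sum(vote_weights)
    return total * COMMUNITY_DISCOUNT if is_community_post else total


# The same three votes yield half as much accrued karma on a Community post.
votes = [vote_weight(1200, strong=True), vote_weight(50), vote_weight(400)]
print(karma_gained(votes, is_community_post=False))  # 8
print(karma_gained(votes, is_community_post=True))   # 4.0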
50 · 1mo · 13
SOME POST-EAG THOUGHTS ON JOURNALISTS For context, CEA accepted at EAG Bay Area 2023 a journalist who has at times written critically of EA and individual EAs, and who is very much not a community member. I am deliberately not naming the journalist, because they haven't done anything wrong and I'm still trying to work out my own thoughts. On one hand, "journalists who write nice things get to go to the events, journalists who write mean things get excluded" is at best ethically problematic. It's very, very, very normal: political campaigns do it, industry events do it, individuals do it. "Access journalism" is the norm more than it is the exception. But that doesn't mean that we should. One solution is to be very careful that the line we draw is "community member or not", not "critical or not". Dylan Matthews is straightforwardly an EA and has reported critically on a past EAG [https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai]: if he were excluded for this I would be deeply concerned. On the other hand, I think that, when hosting an EA event, an EA organization has certain obligations to the people at that event. One of them is protecting their safety and privacy. EAs who are journalists can, I think, generally be relied upon to be fair and to respect the privacy of individuals. That is not a trust I extend to journalists who are not community members [https://observer.com/2012/07/faith-hope-and-singularity-entering-the-matrix-with-new-yorks-futurist-set/]: the linked example is particularly egregious, but tabloid reporting happens. EAG is a gathering of community members. People go to advance their goals: see friends, network, be networked at, give advice, get advice, learn interesting things, and more. In a healthy movement, I think that EAGs should be a professional obligation, good for the individual, or fun for the individual. It doesn't have to be all of them, but it shouldn't harm them on any axis. Someone might be out ab
42 · 1mo
LEARNING FROM AMNESTY INTERNATIONAL'S MANAGEMENT MALPRACTICE CRISIS The recent discussions of harms caused by EAs vaguely reminded me of controversies around misbehaviour committed by leaders of Amnesty International. Very horribly, these apparently only came to light due to two suicides that were, as I understand it, partially caused by workplace bullying at Amnesty International offices. From Wikipedia [https://en.wikipedia.org/wiki/Amnesty_International#2019_report_on_workplace_bullying]: POTENTIAL NEXT STEPS (I likely won't find time to do more here. :/ ) Amnesty hired the Konterra Group, which subsequently wrote the "AMNESTY INTERNATIONAL Staff Wellbeing Review" [https://www.amnesty.org/en/wp-content/uploads/2021/05/ORG6097632019ENGLISH.pdf]; on a very quick skim, it seems generally insightful and potentially applicable to EA. * Skim the report and extract useful lessons for EA. * Quickly evaluate whether the report's quality and value suggest that EAs might want to work with the Konterra Group [https://konterragroup.net/evaluation-organizational-learning/what-we-do/] to review the EA community:
4 · 5d
Would an AI governance book that covered the present landscape of governance-related topics (maybe like a book version of the FHI's AI Governance Research Agenda [https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf]?) be useful? We're currently at a weird point where there's a lot of interest in AI - news coverage, investment, etc. It feels weird not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and I'm aware that most people are highly sceptical of trying not to "politicise" issues like these, but it might be a good idea. If it were written by, say, Toby Ord - or anyone sufficiently detached from American left/right politics, with enough prestige, background, and experience with writing books like these - I feel like it might be really valuable. It might also be more approachable than other books covering AI risk, like, say, Superintelligence. It might also seem a little more concrete, because it might cover scenarios that are easier for most people to imagine - scenarios that are more near-term and less "sci-fi". Thoughts on this?
5 · 11d
There are different ways to approach telling people about effective altruism (or caring about the future of humanity or AI safety etc): * "We want to work on solving these important problems. If you care about similar things, let's work together!" * "We have figured out what the correct things to do are, and now we are going to tell you what to do with your life." It seems like a lot of EA university group organisers are doing the second thing, and to me, this feels weird and bad. A lot of our disagreement about specific things comes from organisers thinking in that second frame: for example, I feel it is icky to use prepared speeches written by someone else to introduce people to EA, and bad to think of people who engage with your group in terms of where they are in some sort of pipeline. I think the first framing is a lot healthier, both for communities and for individuals who are doing activities under the category of "community building". If you care deeply about something (eg: using spreadsheets to decide where to donate, forming accurate beliefs, reducing the risk we all die due to AI, solving moral philosophy, etc) and you tell people why you care and they're not interested, you can just move along and try to find people who are interested in working together with you in solving those problems. You don't have to make them go through some sort of pipeline where you start with the most appealing concepts to build them up to the thing you actually want them to care about. It is also healthier for your own thinking, because putting yourself in the mindset of trying to persuade others is, in my experience, pretty harmful. When I have been in that mode in the past, it crushed my ability to notice when I was confused. I also have other intuitions for why doing the second thing just doesn't work if you want to get highly capable individuals who will actually solve the biggest problems, but in this comment, I just wanted to point out the distinction betwe
12 · 1mo
Calling all Lithuanians! I'm on the lookout for people who are interested in effective altruism / rationality and living in Lithuania. If you happen to know anyone like that, let me know so I can invite them to apply to the upcoming EAGxNordics conference [https://www.effectivealtruism.org/ea-global/events/eagxnordics-2023]. For context, I am on the organising team for EAGxNordics, and one of our goals is to grow the smaller EA communities in the region, most notably Lithuania, which is the largest country in the Baltics but has the smallest EA presence. My hope is that the conference will help connect existing EA-aligned individuals living in Lithuania who might not know each other.

Work on “building effective altruism” is about growing, shaping, or otherwise improving effective altruism as a practical and intellectual project. This can involve creating communities and institutions, developing norms, or running infrastructure.