After seeing some of the debate last month about effective altruism's information-sharing / honesty / criticism norms (see Sarah Constantin's follow-up and replies from Holly Elmore (1, 2), Rob Wiblin (1, 2), Jacy Reese, Christopher Byrd), I decided to experiment with an approach to getting less filtered feedback. I asked folks over social media to anonymously answer this question:

If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]

I got a lot of high-quality responses, and some people suggested that I cross-post them to the EA Forum for further discussion. I've posted paraphrased versions of many of the responses below. Some cautions:

1. I have no way to verify the identities of most of the respondents, so I can't vouch for the reliability of their impressions or anecdotes. Anonymity removes some incentives that keep people from saying what's on their mind, but it also removes some incentives to be honest, compassionate, thorough, precise, etc. I also have no way of knowing whether a bunch of these submissions come from a single person.

2. This was first shared on my Facebook wall, so the responses are skewed toward GCR-oriented people and other sorts of people I'm more likely to know. (I'm a MIRI employee.)

3. Anonymity makes it less costly to publicly criticize friends and acquaintances, which seems potentially valuable; but it also makes it easier to make claims without backing them up, and easier to widely spread one-sided accounts before the other party has time to respond. If someone writes a blog post titled 'Rob Bensinger gives babies ugly haircuts', that can end up widely shared on social media (or sorted high in Google's page rankings) and hurt my reputation with others, even if I quickly reply in the comments 'Hey, no I don't.' If I'm too busy with a project to quickly respond, it's even more likely that a lot of people will see the post but never see my response.

For that reason, I'm wary of giving a megaphone to anonymous unverified claims. Below, I've tried to reduce the risk slightly by running comments by others and giving them time to respond (especially where the comment named particular individuals/organizations/projects). I've also edited a number of responses into the same comment as the anonymous submission, so that downvoting and direct links can't hide the responses.

4. If people run experiments like this in the future, I encourage them to solicit 'What are we doing right?' feedback along with 'What would you change?' feedback. Knowing your weak spots is important, but if we fall into the trap of treating self-criticism alone as virtuous/clear-sighted/productive, we'll end up poorly calibrated about how well we're actually doing, and we're also likely to miss opportunities to capitalize on and further develop our strengths.


Anonymous #28:

I have really positive feelings towards the effective altruism community on the whole. I think EA is one of the most important ideas out there right now.

However, I think that there is a lot of hostility in the movement towards those of us who started off as 'ineffective altruists,' as opposed to coming from the more typical Silicon Valley perspective. I have a high IQ, but I struggled through college and had to drop out of a STEM program as a result of serious mental health disturbances. After college, I wanted to make a difference, so I've spent my time since then working in crisis homeless shelters. I've broken up fistfights, intervened in heroin overdoses, received 2am death threats from paranoid meth addicts, mopped up the blood from miscarriages. I know that the work I've done isn't as effective as what the Against Malaria Foundation does, but I've still worked really hard to help people, and I've found that my peers in the movement have been very dismissive of it.

I'm really looking to build skills in an area where I can do more effective direct work. I keep hearing that the movement is talent-constrained, but it isn't clearly explained anywhere what the

[...]

I want to hug this person so much!

BenHoffman
I want to encourage this person to:

* Write about what you've learned doing direct work that might be relevant to EAs.
* Reach out to me if I can be helpful with this in any way.
* Keep doing the good work you know how to do, if you don't see any better options.
* Stay alert for high-leverage opportunities to do more, including opportunities you can see and other EAs can't, where additional funding or people or expertise that EAs might have would be helpful.

so much!
Ben Millwood🔸
"Keep doing the good work you know how to do, if you don't see any better options" still sounds implicitly dismissive to me. It sounds like you believe there are better options, and only a lack of knowledge or vision is keeping this person from identifying them. Breaking up fistfights and intervening in heroin overdoses to me sound like things that have small-to-moderate chances of preventing catastrophic, permanent harm to the people involved. I don't know how often opportunities like that come up, but is it so hard to imagine they outstrip a GWWC pledger on an average or even substantially above-average salary?

Meta: this seems like it was a really valuable exercise based on the quality of the feedback. Thank you for conceiving it, running it, and giving thought to the potential side effects and systematic biases that could affect such a thing. It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Fluttershy
Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?
RobBensinger
I think this would be too open to abuse; see the concerns I raised in the OP. An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.
RomeoStevens
What about an anonymous forum that was both private and had a strict 'no object-level names, personal or organizational' policy, such that ideas could be discussed more freely? Obviously there'd be a grey area around alluding to object-level people and organizations, but I think we can simply elect a king who is reasonable and agree not to squabble about the chosen line.

Anonymous #22:

I think that mentorship and guidance are lacking and undervalued in the EA community. This seems odd to me. Everyone seems to agree that coordination problems are hard, that we’re not going to solve tough problems without recruiting additional talent, and that outreach in the "right" places would be good. Functionally, however, most individuals in the community, most organizations, and most heads of organizations seem to act as though they can make a difference through brute force alone.

I also don’t get the impression that most EA organizations and heads of EA organizations are keen on meeting or working with new and interested people. People affiliated with EA write many articles about increasing personal productivity; I have yet to read a single article about increasing group effectiveness.

80,000 Hours may be the sole exception to this rule, though I haven’t formally gone through their coaching program, so I don’t know what their pipeline is like. CFAR also seems to be addressing some of these issues, though their workshops are still prohibitively expensive for lots of people, especially newcomers. EA outreach is great, but once people have heard a

[...]
Daniel_Eth
Another possibility is that most people in EA are still pretty young, so they might not feel like they're really in a position to mentor anyone.

Anonymous #6:

If I could wave a magic wand and change the EA community, I'd have everyone constantly posting little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets. So: How to reduce Christianity? How to get people to heaven? Best way to speed up nanomedicine? Best way to reduce ageism? Best way to slow down economic progress?

BenHoffman
Relevant resources:

* Fact Posts: How and Why
* The Open Philanthropy Project's Shallow Investigations provide nice template examples.
* The Neglected Virtue of Scholarship
* Scholarship: How to Do It Efficiently

I'm fairly new to the EA Forum, maybe someone who's been here longer knows of other resources on this site.
Richard_Batty
Even simpler than fact posts and shallow investigations would be Skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours: 1 hour for the conversation, 1 hour for writing up.

Anonymous #27:

Many practitioners strike me as being dogmatic and closed-minded. They maintain a short internal whitelist of things that are considered 'EA' -- e.g., working at an EA-branded organization, or working directly on AI safety. If an activity isn't on the whitelist, the dogmatic (and sometimes wrong) conclusion is that it must not be highly effective. I think that EA-associated organizations and AI safety are great, but they're not the only approaches that could make a monumental difference. If you find yourself instinctively disagreeing, then you might be in the group I'm talking about. :)

People's natural response should instead be something like: 'Hmm, at first blush this doesn't seem effective to me, and I have a strong prior that most things aren't effective, but maybe there's something here I don't understand yet. Let's see if I can figure out what it is.'

Level of personal involvement in effective altruism: medium-high. But I wouldn't be proud to identify myself as EA.

BenHoffman
I wish to register my emphatic partial agreement with much of this one, though I do still identify as EA, and have also talked with many people who are quite curious and interested in getting value from learning about new perspectives.

Anonymous #11:

I think that a lot of people in effective altruism who focus on animal welfare as a cause area have demonstrated a pattern of doing extraordinarily uncooperative and epistemically terrible things. Examples: calling people bad names for eating meat, in the hope of changing their behavior via social pressure and stigma (rather than argument); comparing meat-eating to conventional murder, in the hope of taking advantage of the noncentral fallacy; 'direct action everywhere,' which often translates in practice into being rude and threatening to people who disagree about various factual questions; ACE basing conclusions on bad leafletting statistics and 'intuition'; threatening to cause public relations mayhem for the event organizers and damage the community's future work if EA Global didn't go vegetarian.

I wouldn't be OK with trying to 'kick animal welfare people out of the movement,' because a) what would that even mean, and b) we're supposed to be a garden of Niceness and Civilization. But it would be great if the EA community actively called out this bullshit when it happened, and demanded that people focusing on this cause met the same high epistemic standards th

[...]
Daniel_Eth
This. As a meat-eating EA who personally does think animal suffering is a big deal, I've found the attitude from some animal rights EAs to be quite annoying. I personally believe that the diet I eat is A) healthier than if I was vegan and B) allows me to be more focussed and productive than if I was vegan, allowing me to do more good overall. I'm more than happy to debate that with anyone who disagrees (and most EAs who are vegan are civil and respect this view), but I have encountered some EAs who refuse to believe that there's any possibility of either A) or B) being true, which feels quite dismissive.

Contrast that attitude to what happened recently at a Los Angeles EA meetup where we went for dinner. Before ordering, I asked around to see if anyone was vegan, since if anyone was, I didn't want to eat meat in front of them and offend them. The person next to me said he was vegan, but that if I wanted meat I should order it since "we're all adults and we want the community to be as inclusive as it can." I decided to get a vegan dish anyway, but having him say that made me feel more welcome.
Dawn Drescher
Oh wow, thank you! That’s so awesome of you! I greatly appreciate it!
IanDavidMoss
For what it's worth and as an additional data point, I'm a meat eater and I didn't feel like this was a big problem at EA Global in 2016. For a gathering in which animal advocacy/veganism is so prevalent, I would have thought it really weird if the conference served meat anyway. The vegetarian food provided was delicious, and the one time I went out to dinner with a group and ordered meat, nobody got up in my face about it.
Daniel_Eth
Yes, that was my general impression of EA global. I feel like most of the people who do get upset about meat eaters in EA are only nominally in EA, and largely interact with the community via Facebook.

Anonymous #13:

(I used to work at an EA-associated organization.)

People involved in effective altruism should expect to have to think outside the box. The EA movement may be too focused on supporting and endorsing causes that are well-established, unambiguous (i.e., that carry minimal Knightian uncertainty), reputable, and high in virtue-signalling value.

The default assumption for people in EA should be that at the very top end of effectiveness, we will probably not find causes that have those properties: the places where you personally can make the big

[...]

Anonymous #1:

My system-1 concerns about EA: the community exhibits a certain amount of conformism, and a general unwillingness to explore new topics.

I think there's some good reasoning behind this: the Pareto rule tells us that obvious things tend to be much more effective than convoluted strategies. However, this also leaves us more vulnerable to unknown unknowns.

The reason I think this is an issue is the general lack of really new proposals in EA discussion posts. I also think that there is a mysterious niche for an EA org dedicated to exploring

[...]
RomeoStevens
This feels really obvious from where I'm sitting, but it's met with incredulity by most EAs I speak with. Applause lights for new ideas, paired with a total lack of engagement when anyone talks about new ideas, seem more dangerous than I think we're giving them credit for.
tomstocker
See Lee Sharkey's recent pain control brief as an example, or Auren Forrester's stuff on suicide.
DC
I have been observing the same thing. What could we do to spark new ideas? Perhaps a recurring thread dedicated to it on this forum or Facebook, or perhaps a new Facebook group? A Giving Game for unexplored topics? How can we encourage creativity?
RomeoStevens
Creativity is a learnable skill and also can be encouraged through conversational/group activity norms. http://malcolmocean.com/2016/05/honing-mode-vs-jamming-mode/ https://vimeo.com/89936101

Anonymous #39:

Level of involvement: I'm not an EA, but I'm EA-adjacent and EA-sympathetic.

EA seems to have picked all the low-hanging fruit and doesn't know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It's hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of m

[...]
lukeprog
I think EA may have picked the lowest-hanging fruit, but there's lots of low-ish hanging fruit left unpicked. For example: who, exactly, should be seen as the beneficiaries aka allkind aka moral patients? EAs disagree about this quite a lot, but there hasn't been that much detailed + broadly informed argument about it inside EA. (This example comes to mind because I'm currently writing a report on it for OpenPhil.)

There are also a great many areas that might be fairly promising, but which haven't been looked into in much breadth+detail yet (AFAIK). The best of these might count as low-ish hanging fruit. E.g.: is there anything to be done about authoritarianism around the world? Might certain kinds of meta-science work (e.g. COS) make future life science and social science work more robust+informative than it is now, providing highly leveraged returns to welfare?
Denkenberger🔸
There is also non-AI global catastrophic risk, like engineered pandemics, and low hanging fruit for dealing with agricultural catastrophes like nuclear winter.
tomstocker
What's wrong with low hanging fruit? Not entertaining enough?
Michael_PJ
I agree that we're in danger of having picked all the low-hanging fruit. But I think there's room to fix this.

Anonymous #12:

I feel that people involved in effective altruism are not very critical of the ways that confirmation bias and hero-of-the-story biases slip into their arguments. It strikes me as... convenient... that one of the biggest problems facing humanity is computers and that a movement popular among Silicon Valley professionals says people can solve it by getting comfortable professional jobs in Silicon Valley and donating some of the money to AI risk groups.

This is obviously not the whole story, as the arguments for taking AI risk ser

[...]
RobBensinger
Three points worth mentioning in response:

1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)

2. 'It's self-serving to think that earning to give is useful' seems like a separate thing from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable', so it isn't surprising to find those beliefs going together.)

3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world', then AI risk is really the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require unusual skills and background for CS/AI. (Ryan Carey wrote in the past: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)
tomstocker
Except that on point 3, the policies advocated and the strategies being tried don't look like people trying to reduce x-risk; they look like people trying to get AI to work rather than backfire.

Anonymous #40:

I'm the leader of a not-very-successful EA student group. I don't get to socialize with people in EA that much.

I wish the community were better at supporting its members in accomplishing things they normally couldn't. I feel like almost everyone just does the things that they normally would. People that enjoy socializing go to meetups (or run meetups); people that enjoy writing blog posts write blog posts; people that enjoy commenting online comment online; etc.

Very few people actually do things that are hard for them, which means tha

[...]

Anonymous #31:

I work for an effective altruism organization. I'd say that over half of my friends are at least adjacent to the space and talk about EA-ish topics regularly.

The thing I'd most like to change is the general friendliness of first-time encounters with EA. I think EA Global is good about this, but house parties tend to have a very competitive, emotionally exhausting 'everyone is sizing you up' vibe, unless you're already friends with some people from another context.

Next-most-important (and related), probably, is that I would want everyo

[...]
Daniel_Eth
Where are all these crazy EA parties that I keep reading about? The only EA parties I've heard of were at EA Global.
RomeoStevens
My guess is that there is a very large underestimation of the value of a higher baseline level of cross-pollination of ideas.

Anonymous #25:

I'm very involved in the EA community, but at this point, it seems unlikely that I'll ever work at an EA organization, because I can't take the pay cut. I want to start a family and raise kids one day, and to me, this is incompatible with a $50k/year, 12h/day job (at least in the Bay Area).

I'm not sure if earning to give is the best solution to this, but sometimes it seems like the only option available.

Anonymous #15:

I wouldn't mind seeing more statistical analysis in a Bayesian framework in effective altruism -- with explicit likelihoods and prior distributions, rather than 'my intuitions about this p-value constitute Bayesian evidence for....' If people really like p-values, they can simulate and get posterior predictive ones.
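
To make the suggestion concrete, here is a minimal sketch of a simulated posterior predictive p-value in Python. Everything in it is hypothetical: the Beta(1, 1) prior, the Beta-Binomial model, and the data (37 successes out of 120 trials) were chosen only to illustrate the mechanics, and a real analysis would pick a test statistic tailored to the question at hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 37 'successes' out of 120 trials.
n, k = 120, 37

# Explicit prior: Beta(1, 1) on the success rate.
# With a binomial likelihood, the posterior is Beta(1 + k, 1 + n - k).
theta = rng.beta(1 + k, 1 + n - k, size=10_000)

# Posterior predictive: replicate the dataset once per posterior draw.
k_rep = rng.binomial(n, theta)

# Posterior predictive p-value for a simple test statistic (the raw count).
ppp = np.mean(k_rep >= k)
print(f"posterior predictive p-value: {ppp:.2f}")
```

The point is just that the prior and likelihood are written down explicitly, so a reader can disagree with the model directly rather than arguing about intuitions.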

Anonymous #9:

  • Practice humility towards charities working on systemic change or in related fields like development. They have been doing it for decades. Many would consider saving a few lives from malaria as non-utilitarian compared with changing policies that affect millions.
  • Be mindful of the risk of recruiting narcissists to represent the movement, as this makes a lot of people's first impression of effective altruism a condescending one. ('I am the most effective altruist!') The Bay Area's status culture is a turn-off for people in Anglophone cou
[...]

Anonymous #14:

I've worked with EA-related organizations, as have many of my friends.

On a system-1 level, I honestly just want to scrap the entire EA project and start over. EAA strikes me as particularly scrappable, but that's just my values.

On a system-2 level, I see the community being eaten by Moloch, roughly as a consequence of Darwinian pressure towards growth conflicting with a need for bona fide epistemic rigor. The reason that we seem to be getting especially eaten by this is that there's a widespread belief that our cause is just, so we're

[...]

Anonymous #10:

I basically agree with Sarah Constantin and Ben Hoffman's critiques. The community is too large and distributed to avoid principal-agent problems and Ribbonfarm-Sociopaths. The more people that are involved, the worse decision-making processes get. So I'd prefer to fragment the community in two, with one focused on projects that are externally-facing and primarily interact with non-EAs, and another that's smaller, denser, and inward-facing, that can be arbitrarily ambitious. The second group has to avoid the forces that attract Sociopaths a

[...]

Anonymous #4:

I think that EA as it exists today doesn't provide much value. It focuses mostly on things that are obvious today ('malaria is bad'), providing people a slightly better way to do what they already think is a good idea, rather than making bets on high-impact large-scale interventions. It also places too much emphasis on alleviating suffering, to the exclusion of Kantian, contractarian, etc. conceptions of ethical obligation.

(By this I primarily have in mind that too many EAs are working on changing the subjective experience of chickens and

[...]

I have spoken with two people in the community who felt they didn't have anyone to turn to who would not throw rationalist-type techniques at them when they were experiencing mental health problems. The fix-it attitude is fairly toxic for many common situations.

If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and when a person is bleeding spoons in front of you, you need a different strategy.

Jess_Whittlestone
I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique." (I think CFAR are great and a lot of their techniques are really useful. But I've also spent a bunch of time feeling bad about the fact that I don't seem able to learn and implement these techniques in the way many other people seem to, and it's taken me a long time to realise that trying to 'figure out' how to fix my problems in a very analytical way is very often not what I need.)
Fluttershy
I'd be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I've thought a lot about how we can support each other. I'm not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though. Speculation ahoy:

1) I wonder if, say, Bay area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us are different from the things that work for most people in important ways.

2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way in which status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).

3) I wonder how much good work could be done on anyone's mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I've never tried something like this before, but I'd eventually like to.

Well, writing that comment was a journey. I doubt I'll stand by all of what I've written here tomorrow morning, but I do think that I'm correct on some points, and that I'm pointing in a few valuable directions.
Jalen_Lyle-Holmes
I'm so intrigued by proposal 3)! I think when a friend is struggling like that I often have a vague feeling of wanting to engage/help in a bigger way than having a few chats about it, and I'm intrigued by this idea of how to do that. And also thinking about myself I think I'd love it if someone did that for me. I'm gonna keep that in mind and maybe try it one day!
Jalen_Lyle-Holmes
I think I would find this super helpful. Low-level mental health stuff has contributed to me basically muddling around for years, nowhere near making good on what I could (in my best attempt at probably faulty self-assessment) potentially learn and contribute.

Anonymous #32:

Level of involvement/familiarity: I work at an EA or EA-associated organization. Please post my five points separately so that people can discuss them without tangling the discussion threads.

Anonymous #32(c):

Note that this point is a little incoherent.

In the absence of proper feedback loops, we will feel like we are succeeding while we are in fact stagnating and/or missing the mark. While I'm wary of using this as a fully general critique, some of the proxies we use for success seem to be only loosely tracking what we actually care about. (See Goodhart's Law.)

For instance, community growth is used as a proxy for success where it might, in fact, be an indicator of concept and community dilution. Engagement on OMfCT, while 'engaging the EA community,' seems to supplant real, critical engagement. (I'm really uncertain of this claim.) With the exception of a few people, often those from the early days of EA, there's little generation of new content, and more meta-fixation on organizations and community critiques.

Tracking quality and novel content is really hard, but it seems far more likely to move EA into the public sphere, academia, etc. than boosting pretty numbers on a graph. We're going to miss a lot of levers for influence if we keep resting on our intellectual laurels.

I'd like to see more essay contests and social rewards for writing, rather than the only respon

[...]
RobBensinger
Anonymous #32(d):
RobBensinger
Anonymous #32(e):
IanDavidMoss
This is a great point. In addition to considering "how can we make it easier to get people to change their minds," I think we should also be asking, "is there good that can still be accomplished even when people are not willing to change their minds?" Sometimes social engineering is most effective when it works around people's biases and weaknesses rather than trying to attack them head on.
Rohin Shah
I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution. I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical. TL;DR: I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.) In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.
Daniel_Eth
Perhaps one implication of this is that it's better to target movement-growing efforts at students (particularly undergrads), since they're less likely to have already made up their minds?
RobBensinger
Anonymous #32(b):
Richard_Batty
What communities are the most novel/talented/influential people gravitating towards? How are they better?
IanDavidMoss
I upvoted this mostly because it was new information to me, but I have the same questions as Richard.
RobBensinger
Anonymous #32(a):

Anonymous #29:

I worry that Sarah Constantin's article will make an existing problem worse. The effective altruism community is made up of people who understand that politics isn't their comparative advantage. But sour grapes transforms this into 'and also, politics is The Dark Arts and if you do it you're Voldemort.'

GiveWell needs to make charity recommendations based on what's true, not based on what it can sell. But effective altruism as a whole is a political project. If it's to become more than a hobby, it needs to use political power to change th

[...]

Anonymous #8:

If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.

If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we'd have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to

[...]
Michael_PJ
There's a lot of EA outside the Bay! The Oxford/London cluster in particular is quite nice (although I live there, so I'm biased).
Foster
+1, the London community is awesome. Also heard very good things about the Berlin & Vancouver communities.
Dawn Drescher
I can recommend Berlin! Also biased. ;-)

Anonymous #37:

I would like to see more humility from people involved in effective altruism regarding metaethics, or at least better explanations for why EAs' metaethical positions are what they are. Among smart friends and family members of mine whom I've tried to convince of EA ideas, the most common complaint is, 'But that's not what I think is good!' I think this is a reasonable complaint, and I'd like it if we acknowledged it in more introductory material and in more of our conversations.

More broadly, I think that rather than having a 'lying probl

[...]
Dawn Drescher
It's fascinating how diverse the movement is in this regard. I've only found a single moral realist EA who had thought about metaethics and could argue for it. Most EAs around me are antirealists or haven't thought about it. (I'm antirealist because I don't know any convincing arguments to the contrary.)
Benjamin_Todd
My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").
lukeprog
Plus some who at least put some non-negligible probability on moral realism, in some kind of moral uncertainty framework.
Dawn Drescher
Ah, cool! I should read it.

Anonymous #17:

I would cause the effective altruism community to exhibit less risk aversion and less groupthink.

Anonymous #5:

At multiple EA events that I've been to, new people who were interested and expressed curiosity about what to do next were given no advice beyond 'donate money and help spread the message' -- even by prominent EA organizers. My advice to the EA community would be to stop focusing so much on movement-building until (a) EA's epistemics have improved, and (b) EAs have much more developed and solid views (if not an outright consensus) about the movement's goals and strategy.

To that end, I recommend clearly dividing 'cause-neutral EA' from 'ca

[...]
IanDavidMoss
I think I'm the one being called out with the reference to "a non-profit art magazine" being framed as EA-relevant, so I'll respond here. I endorse the commenter's thought that [...] If I'm understanding the proposal correctly, it's envisioning something like a reddit-style set of topic-specific subforums in which EA principles could be discussed as they relate to that topic. What I like about that solution is that it allows for the clarity of discussion boundaries that the commenter desires, but still includes discussions of cause-specific effectiveness within the broader umbrella of EA, which helps to facilitate cross-pollination of thinking across causes and from individual causes to the more global cause-neutral space.

Anonymous #34:

The way that we talk about policy in the effective altruism community is unsophisticated. I understand that this isn't most EAs' area of expertise, but in that case just running around and saying 'we should really get EAs into policy' is pretty unhelpful. Anyone who is fairly inexperienced in 'policy' could quickly get a community-knowledge comparative advantage just by spending a couple of months doing self-study and having conversations, and could thereby start helpfully orienting our general cries for more work on 'policy.'

To be fair, there are some people doing this. But why not more?

Anonymous #21:

A meta comment: This post has gotten a lot of good replies! Like, Jesus, where are all of these people, and why do I never hear from them otherwise? I assume most of them must be people I've run into somewhere, on Facebook or at parties or conferences or whatever. But I guess they must just not say anything.

I don't agree with everything, obviously, but I see lots of things that I normally wouldn't expect to hear on Facebook. If any of you would like to continue these conversations over email, I've given Rob my contact information and given him permission to share it with parties who ask for it.

Anonymous #16:

Level of involvement: Most of my friends are involved in effective altruism and talk about it regularly.

The extent to which AI topics and MIRI seem to have increased in importance in effective altruism worries me. The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling. This is also a noticeable red flag for groupthink. For example, Holden's explanation of why he has become more favorably disposed to MIRI was pretty unconvinci

[...]
jimrandomh
I'm confused by the bit about this not being reflected in organizations' public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.

Anonymous #3:

Stop talking about AI in EA, at least when doing EA outreach. I keep coming across effective altruism proponents claiming that MIRI is a top charity, when they seem to be writing to people who aren't in the EA community who want to learn more about it. Do they realize that this comes across as very biased? It makes it seem like 'I know a lot about an organization' or 'I have friends in this organization' are EA criteria. Most importantly, talking about AI in doomsday terms sounds kooky. It stands apart from the usual selections, as it's one

[...]

Anonymous #23:

I used to work for an organization in EA, and I am still quite active in the community.

1 - I've heard people say things like, 'Sure, we say that effective altruism is about global poverty, but -- wink, nod -- that's just what we do to get people in the door so that we can convert them to helping out with AI / animal suffering / (insert weird cause here).' This disturbs me.

2 - In general, I think that EA should be a principle, not a 'movement' or set of organizations. I see no reason that religious charities wouldn't benefit from expos

[...]

Anonymous #35:

I would not feel like people in the EA community would backstab me if the benefit to them outweighed the harm. (Where benefit to them often involves their lofty goals, so it can go under the guise of 'effective altruism.')

Anonymous #24:

Intentional Insights makes me cringe for its obvious clickbaityness. I am totally consequentialist, and if it helps to raise the sanity waterline, go for it -- but I'm skeptical that it does. I feel a bit repelled by the low-effort content and shady attention-grabbing techniques; it causes me to feel slightly less respectful of the EA community, and less like I belong there. I hope that's just me.

If you are going to use any shady techniques, fabricated content or praise, non-organic popularity on social networks, or anything along those

[...]

Anonymous #33:

I think people in EA should give up on trying not to seem cultish and just go full-blown weird.

RobBensinger
Anonymous #38:
RobBensinger
There are versions of this I endorse, and versions I don't endorse. Anon #38 seems to be interpreting #33 as saying 'let's be less tolerant of normal people/behaviors', but my initial interpretation of #33 was that they were saying 'let's be more tolerant of weird people/behaviors'.

Anonymous #18:

Speaking regarding the Bay Area effective altruism community: There's something about status that could be improved. On the whole, status (and what it gets you) serves a valuable purpose; it's a currency used to reward those producing what the community values. The EA community is doing well at this in that it does largely assign status to people for the right things. At the same time, something about how status is being done is leaving many people feeling insecure and disconnected.

I don't know what the solution is, but you said magic wand, so I'll punt on what the right response should be.

Anonymous #26:

I probably classify as 'talent,' since I've been repeatedly shortlisted by EA organizations. I'm glad that superior applicants applied and got the jobs! It's been a shame for me personally, though, because a work environment like that would have been ideal for overcoming my longstanding depression.

Ordinarily, I'd just say 'that's life,' but it seems worthwhile to point out the value of the shared ethos, including EA's interest in personal productivity, etc. I'm sure that I'd be achieving exponentially greater benefits for the world after

[...]

Anonymous #7:

Remove Utilitarianism as a pillar, platform, assumption, or commonly held ethical belief.

Rob Bensinger replied:

If the author reads this, I'd be curious to see a follow-up that says more about what they mean by "utilitarianism". Lots of EAs don't strictly identify with utilitarianism (and utilitarianism isn't generally treated as a pillar of EA), but think it's useful to think in vaguely "utilitarianism-ish" terms: focusing on the consequences of one's actions in deciding what to do; among the consequences, heavily wei

[...]

Anonymous #2:

I'd prefer it if more people in EA were paid on a contract basis, if more people were paid lower salaries, if there were more mechanisms for the transfer of power in organizations (e.g., a 2- or 3-year term limit for CEOs and a maximum age at entry), and if there were more direct donations. Also: better systems to attract young people. More people in biology. More optimism. More willingness to broadcast arguments against working on animal welfare that have not been refuted.

Evan_Gaensbauer
I originally downvoted this comment, because some of the suggestions obviously suck, but some of the points here could be improved. There are a lot of effective altruists who have just as good ideas as anyone working at an EA non-profit, or a university, but due to a variety of circumstances, they're not able to land those jobs. Some effective altruists already run Patreons for their blogs, and I think the material coming out of them is decent, especially as they can lend voices independent of institutions on some EA subjects. Also, they have the time to cover or criticize certain topics other effective altruists aren't covering, since their effort is taken up by a single research focus.

Nothing can be done about this criticism if some numbers aren't given. Criticizing certain individuals for getting paid too much, or criticizing certain organizations for paying their staff too much, isn't an actionable criticism unless one gets specific. I know EA organizations whose staff, including the founders who decide the budget, essentially get paid minimum wage. On the other hand, GiveWell's cofounders Holden and Elie get paid well into the six figures each year. While I don't myself much care, I've privately chatted with people who perceive this as problematic. Then, there may be some staff at some EA organizations who may appear to others to get paid more than they deserve, especially when their salaries could pay for one or more full-time salaries for other individuals perceived to be just as competent. That last statement was full of conditionals, I know, but it's something I'm guessing the anonymous commenter was concerned about. Again, they'd need to be specific about what organization they're talking about.

The biggest problem with this comment is the commenter made broad, vague generalizations which aren't actionable. It's uncomfortable to make specific criticisms of individuals or organizations, yes, but the point of an anonymous criticism is to be able to do tha

Anonymous #36:

I'd like to see more information from the EA community about which organizations are most effective at addressing environmental harm, and at reducing greenhouse gas emissions in particular. More generally, I'd like to see more material from the EA community about which organizations or approaches are most effective in the category in which they fall.

Many EA supporters doubtless accept a broadly utilitarian ethical framework, according to which all activities can be ranked in order of their effect on aggregate welfare. I think the notion

[...]

Anonymous #20:

I've read a few articles about effective altruism, and I have friends who are significantly involved in it. I also vote for 'stop talking about AI in EA.'
