All of Fluttershy's Comments + Replies

Yeah, this sort of thing is basically always in danger of becoming politics all the way down. One good heuristic is to keep in mind the goals you hope to satisfy by engaging--if you want to figure out whether to accept an article's central claim, is the answer to your question decisive with respect to your decision? If you're trying to sway people, are you being careful to make sure you can plausibly deny that you're doing anything other than truthseeking? If you're engaging because you think it's impactful to do so, are you treating your engagement as a tool rather than an end?

As a guy who used to be female (I was AMAB), Kelly's post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen's experiences with respect to this.

The change in how you're treated is much more noticeable when making progress in the direction of becoming more guyish; not sure if this is because this change tends to happen quickly (testosterone is powerful + quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma making up some of the posi... (read more)

I like the article. The first table makes it viscerally clear that the value of information (VOI) from better estimating eta (or from finding a better model for utility as a function of consumption on the margins) could be high, if you're relatively more interested in global poverty-focused EA than in other causes within EA.

I'm not aware of any better figures you could have used for GWWC/TLYCS/REG's leverage, and I'm not sure if many of us take estimates of leverage for meta-organizations literally, even relative to how literally we take normal EA cost-effectiveness estimates.... (read more)
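A brief illustration of the eta point in the comment above, assuming the standard isoelastic form for utility of consumption (the comment doesn't specify a functional form, so this is just the conventional choice):

```latex
% Isoelastic (CRRA) utility of consumption, parameterized by eta.
% Marginal utility is c^{-\eta}, so eta controls how much more a
% marginal dollar is worth to someone poor than to someone rich.
u(c) =
\begin{cases}
  \dfrac{c^{1-\eta} - 1}{1 - \eta} & \eta \neq 1 \\[2ex]
  \ln c & \eta = 1
\end{cases}
```

Since marginal utility falls off as c^(-eta), small changes in the estimate of eta can substantially change how much good a marginal dollar of consumption does at low incomes relative to high ones, which is why better estimates of eta could carry high VOI for poverty-focused giving.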

I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though:

I might be a bit of an outlier

This factor may push in the opposite direction from the one you'd expect, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to be more popular among those who stuck around than it would be if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) were still here to give you their opinion on whether you should use that example. So, keep doing what you're doing! I like your approach.

The objection that it's ableist to promote funding for trachoma surgeries rather than guide dogs doesn't have to do with how many QALYs we'd save by providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we're showing to disabled people. I'm not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences precisely because it's ableist. (It's also possible that using the example a... (read more)

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we

... (read more)
0
AGB
7y
(Sorry for the slower response; your last paragraph gave me pause and I wanted to think about it. I still don't feel like I have a satisfactory handle on it, but also feel I should reply at this point.)

This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not 'the funds have not been well-received', but rather 'the funds have been well-received, but only because too many (most?) people are analysing the idea in a superficial way'. Maybe that is what you were aiming for originally and I just didn't read it that way.

True. That post was only a couple of months before this one, though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I'm curious whether you personally perceive a change (drop) in popularity in your circles?

This story sounds plausibly true. It's a difficult one to falsify, though (I could flip all the language and get something that also sounds plausibly true), so, turning it over in my head for the past few days, I'm still not sure how much weight to put on it.

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned by the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would ... (read more)

3
BenHoffman
7y
I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I'd been told earlier, privately and publicly - that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I'm confused, I'm confused about how this wasn't just a lie. My initial response was "HOW IS THIS OK???" (verbatim quote). I'm willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement. ETA: I'm happy with the update to the OP and don't think I have any unresolved complaint on this particular wording issue.
2
AGB
7y
In the OP Kerry wrote: CEA's original expectation of donations could just have been wrong, of course. But I don't see a failure of logic here.

Re. your last paragraph, Kerry can confirm or deny, but I think he's referring to the fact that a bunch of people were surprised to see GWWC start recommending the EA Funds and closing down the GWWC trust recently (that's one example; I'm not sure if there were other cases) when CEA hadn't actually officially given the funds a 'green light' yet. So he's not referring to the same set of criticisms you are talking about.

I think 'confusion at GWWC's endorsement of EA Funds' is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds, but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.

*I got the e-mail from GWWC announcing this on 11th April. I got CEA's March 2017 update saying they'd decided to continue with the funds later on the same day, but I think that goes to a much narrower list, and in the interim I was confused and was going to ask someone about it. Checking now, it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don't read that. https://www.centreforeffectivealtruism.org/blog/cea-update-march-2017/

On one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.

Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much--I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticis... (read more)

2
Kerry_Vaughan
7y
My sense (and correct me if I'm wrong) is that the biggest concerns seem to be related to the fact that there is only one fund for each cause area and the fact that Open Phil/GiveWell people are running each of the funds. I share this concern and I agree that it is true that EA Funds has not been changed to reflect this. This is mostly because EA Funds simply hasn't been around for very long and we're currently working on improving the core product before we expand it.

What I've tried to do instead is precommit to 50% or less of the funds being managed by Open Phil/GiveWell and give a general timeline for when we expect to start making good on that commitment. I know that doesn't solve the problem, but hopefully you agree that it's a step in the right direction.

That said, I'm sure there are other concerns that we haven't sufficiently addressed so far. If you know of some off the top of your head, feel free to post them as a reply to this comment. I'd be happy to either expand on my thoughts or address the issue immediately.
6
AGB
7y
So I probably disagree with some of your bullet points, but unless I'm missing something I don't think they can be the crux of our disagreement here, so for the sake of argument let's suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.

...I still don't see how to get from here to (for example) 'The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time'. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they 'should', why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry's post introducing EA Ventures has just 16 upvotes*.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go

I appreciate that the post has been improved a couple times since the criticisms below were written.

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communicati... (read more)

2
RyanCarey
7y
There are a range of reasons why this is not really an appropriate way to communicate: it's socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling. It's a shame you've been upset. Still, one can call others' writing upsetting, immoral, mean-spirited, etc. - there is a lot of leeway to make other reasonable conversational moves.
15
AGB
7y

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of su... (read more)

2
Zeke_Sherman
7y
This is wholly speculative. I've seen no evidence that consequentialists "feel bad" in any emotionally meaningful sense for having made donations to the wrong cause. Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would even be more subtle? Why can't we control our feelings and not fall to pieces at the thought that we might have been responsible for injustice? The world sucks and when one person screws up, someone else is suffering and dying at the other end. Being cognizant of this is far more important than protecting feelings. I think you ought to place a bit more faith in the ability of effective altruists to make rational decisions.
4
Kerry_Vaughan
7y
From my point of view, the context for the first section was to explain why we updated in favor of EA Funds persisting past the three-month trial before the trial was over. This was important to communicate because several people expressed confusion about our endorsement of EA Funds while the project was still technically in beta. This is why the first section highlights mostly positive information about EA Funds whereas later sections highlight challenges, mistakes etc. I think the update that your comment is suggesting is that I should have made the first section longer and should have provided a more detailed discussion of the considerations for and against concluding that EA Funds has been well-received so far. Is that what you think or do you think I should make a different update?
4
Kerry_Vaughan
7y
I think your concern is that since NPS was developed with for-profit companies in mind, we shouldn't assume that a +50 NPS is good for a nonprofit. If so, that's fair and I agree. When people benchmark NPS scores, they usually do it by comparing NPS scores in similar industries. Unfortunately, I don't know of any data on NPS scores for nonprofits like ours (e.g. consumer-facing and providing a donation service). I think the information about what NPS score is generally considered good is helpful for understanding why we updated in favor of EA Funds persisting past the three-month trial. Is it your view that I a) shouldn't have included NPS data at all, b) shouldn't have included information about what scores are good, or c) should have caveated the paragraph more carefully?
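For reference, NPS is standardly computed from 0-10 "how likely are you to recommend us?" responses: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6), giving a number between -100 and +100. A minimal sketch; the example responses below are invented for illustration and are not CEA's survey data:

```python
def nps(scores):
    """Net Promoter Score: percentage of promoters (ratings 9-10)
    minus percentage of detractors (ratings 0-6), from 0-10 ratings.
    Returns a value between -100 and +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Invented example: 6 promoters, 3 passives, 1 detractor out of 10.
print(nps([10, 10, 10, 9, 9, 9, 8, 7, 7, 3]))  # -> 50.0
```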
3
Kerry_Vaughan
7y
I'm not sure I follow the concern here. Are you arguing that a) the "OPP's last dollar" content is not attempting to provide an argument, or b) that it's wrong to give an argument if the argument causes guilt as a side effect, or are you arguing for something else? I'd be willing to defend the claim that it's acceptable to make arguments for a position even if those arguments have the unintended consequence of causing guilt.
5
Kerry_Vaughan
7y
Thanks for taking the time to provide such detailed feedback. I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I'd read on Facebook and was not thinking about responses to the initial launch post. I agree that it's not fair to say that the criticisms have been predominantly about website copy. I've changed the relevant section in the post to include links to some of the concerns we received in the launch post. I'd like to be as exhaustive as possible, so please provide links to any areas I missed so that I can include them (note that I didn't include all of the comments you linked to if I thought our launch post already addressed the issue).

This is a problem, both for the reasons you give:

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.

and through this mechanism, which you correctly point out:

The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.

The above two considerations combine... (read more)

I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:

the risk of losing flexibility by enforcing what is an “EA view” or not

It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on in... (read more)

5
Gregory Lewis
7y
FWIW, as someone who contributed to the InIn document, I approve of (and recommended during discussion) the less ambitious project this represents.

Noted! I can understand that it's easy to feel like you're overstepping your bounds when trying to speak for others. Personally, I'd have been happy for you all to take a more central leadership role, and would have wanted you all to feel comfortable if you had decided to do so.

My view is that we still don't have reliable mechanisms to deal with the sorts of problems mentioned (i.e. the Intentional Insights fiasco), so it's valuable when people call out problems as they have the ability to. It would be better if the EA community had ways of calling out suc... (read more)

I believe you when you say that you don't benefit much from feedback from people not already deeply engaged with your work.

There's something really noticeable to me about the manner in which you've publicly engaged with the EA community through writing for the past while. You mention that you put lots of care into your writing, and what's most noticeable about this for me is that I can't find anything that you've written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really ... (read more)

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first pa... (read more)

-2
kbog
7y
I've noticed that strawmanning and poor interpretations of my writing are a trend in your comments. Cut it out. I did not state that lurkers should respect us at the expense of us disrespecting them. I stated quite clearly that lurkers feel nothing of the sort, since they are observers. This has nothing to do with who they are, and everything to do with the fact that they are passively reading the conversation rather than being a subject of it. Rather, I argued that lurkers should be led to respect us instead of being unimpressed by us, and that they would be unimpressed by us if they saw that the standard reaction to somebody criticizing and leaving the movement was to leave their complaints unassailed and to affirm that such people don't fit in the movement.
2
Dawn Drescher
7y
Thank you. <3

I agree with your last paragraph, as written. But this conversation is about kindness, trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether people who care about the same things we do waste time figuring them out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm hap... (read more)

4
Owen Cotton-Barratt
7y
Really liked this comment. Would be happy to see a top level post on the issue.
1
kbog
7y
No, it's not cold. It's indifferent, and normal. No one in any social movement worries about wasting the time of people who come to learn about things. Churches don't worry that they're wasting people's time when inviting them to come in for a sermon; they don't advertise all the reasons that people don't believe in God. Feminists don't worry that they're wasting people's time by not advertising that they want white women to check their privilege before colored ones. BLM doesn't worry that it's wasting people's time by not advertising that they don't welcome people who are primarily concerned with combating black-on-black violence. And so on.

Learning what EA is about does not take a long time. This is not like asking people to read Marx or the LessWrong sequences. The books by Singer and MacAskill are very accessible and do not take long to read. If someone reads it and doesn't like it, so what? They heard a different perspective before going back to their ordinary life. Who thinks "I'm an effective altruist and I feel unwelcome here in effective altruism because people who don't agree with effective altruism aren't properly shielded from our movement"? If you want to make people feel welcome then make it a movement that works for them.

I fail to see how publicly broadcasting incompatibility with others does any good. Sure, it's nice to have a clearly defined outgroup that you can contrast yourselves with, to promote solidarity. Is that what you mean? But there are much easier and safer punching bags to be used for this purpose, like selfish capitalists or snobby Marxist intellectuals.

Intersectionality does not mean simply looking at people's experiences from different backgrounds. It means critiquing and moving past sweeping modernist narratives of the experiences of large groups by investigating the unique ways in which orthogonal identity categories interact. I don't see why it's helpful, given that identity hasn't previously entered the picture at all in t

There's nothing necessarily intersectional/background-based about that

People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.

One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect of others' time would be that:

  • respecting people's intellectual autonomy and being generally kind tends to bring more skilled people to EA
  • attracting more skilled EAs is worth it in u
... (read more)
1
kbog
7y
I'm not going to concede the ground that this conversation is about kindness or intellectual autonomy. Because it's really not what's at stake. This is about telling certain kinds of people that EA isn't for them. But this is about optimal marketing and movement growth, a very objective empirical question. It doesn't seem to have much to do with personal experiences; we don't normally bring up intersectionalism in debates about other ordinary things like this, we just talk about experiences and knowledge in common terms, since race and so on aren't dominant factors.

By the way, think of the kind of message that would be sent. "Hey you! Don't come to effective altruism! It probably isn't for you!" That would be interpreted as elitist and close-minded, because there are smart people who don't have the same views that other EAs do and they ought to be involved.

Let's be really clear. The points given in the OP, even if steelmanned, do not contradict EA. They happened to cause trouble for one person, that's all. You can interpret that kind of speech prescriptively - i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.

Edit: also, since you stated earlier that you don't actually identify as EA, it really doesn't make any sense for you to complain about how we talk about what we believe.

We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have! I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than yo... (read more)

1
kbog
7y
This isn't about "let's all check our privileges", this is "the trivial interests of wealthy people are practically meaningless in comparison to the things we're trying to accomplish." There's nothing necessarily intersectional/background-based about that; you can find philosophers in the Western moral tradition arguing the same thing.

Sure, they're valid perspectives. They're also untenable, and we don't agree with them, since they place wealthy people's efforts, time, and autonomy on par with the need to mitigate suffering in the developing world, and such a position is widely considered untenable by many other philosophers who have written on the subject. Having a perspective from another culture does not excuse you from having a flawed moral belief.

But don't get confused. This is not "should we rip people off/lie to people in order to prevent mothers from having to bury their little kids" or some other moral dilemma. This is "should we go out of our way to give disclaimers and pander to the people we market to, something which other social movements never do, in order to save them time and effort." It's simply insane.

The kind of 'kindness' being discussed here - going out of one's way to make your communication maximally considerate to all the new people it's going to reach - is not grounded in traditional norms and inclinations to be kind to your fellow person. It's another utilitarian-ish approach, equally impersonal as donating to charity, just much less effective.

For me, most of the value I get out of commenting in EA-adjacent spaces comes through tasting the ways in which I gently care about our causes and community. (Hopefully it is tacit that one of the many warm flavors of that value for me is in the outcomes our conversations contribute to.)

But I suspect that many of you are like me in this way, and also that, in many broad senses, former EAs have different information than the rest of us. Perhaps the feedback we hear when anyone shares some of what they've learned before they go will tend to be less rewarding... (read more)

Personally, I've noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the "opposite" strategy, of being more positive on the pledge and feeling quite comfortable with GiveWell's approach to splitting, are seeing a very different social landscape than I am. Maybe they're aware of people who would... (read more)

You're clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.

When the word "care" is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word "care" is used carelessly. This has all sorts of hard to track negative effects; the sort of people who are irked by things like misuse of the word "care&qu... (read more)

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

I'd be interested in c... (read more)

0
Jalen_Lyle-Holmes
6y
I'm so intrigued by proposal 3)! I think when a friend is struggling like that I often have a vague feeling of wanting to engage/help in a bigger way than having a few chats about it, and I'm intrigued by this idea of how to do that. And also thinking about myself I think I'd love it if someone did that for me. I'm gonna keep that in mind and maybe try it one day!

It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction involved in answering such queries (especially perfectionism) and thus get dialogs going.

Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?

4
RobBensinger
7y
I think this would be too open to abuse; see the concerns I raised in the OP. An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.

Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.

It makes some sense that there could be gaps which Open Phil isn't able to fill, even if Open Phil thinks they're no less effective than the... (read more)

1
Kerry_Vaughan
7y
Thanks for the feedback! Two thoughts:

1) I don't think the long-term goal is that Open Phil program officers are the only fund managers. Working with them was the best way to get an MVP version in place. In the long run, we want to use the funds to offer worldview diversification and to expand the funding horizons of the EA community.

2) I think I agree with you. However, since the Open Phil program officers know what Open Phil is funding, it means that the funds should provide options that are at least as good as Open Phil's funding. (See Carl Shulman's post on the subject.) The hope is that the "at least as good as Open Phil" bar is higher than most donors can reach now, so the fund is among the most effective options for individual donors.

Let me know if that didn't answer the question.
4
ozymandias
7y
IIRC, Open Phil often wants to not be a charity's only funder, which means they leave the charity with a funding gap that could maybe be filled by the EA Fund.

Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3

I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)

1
Dawn Drescher
7y
Oh, thank you! <3 I’m trying my best. Oh yeah, the Berkeley community must be huge, I imagine. (Just judging by how often I hear about it and from DxE’s interest in the place.) I hope the mourning over Derek Parfit has also reminded people in your circles of the hitchhiker analogy and two-level utilitarianism. (Actually, I’m having a hard time finding out whether Parfit came up with it or whether Eliezer just named it for him on a whim. ^^)

I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.

1
Dawn Drescher
7y
Thanks. May I ask what your geographic locus is? This is indeed something that I haven’t encountered here in Berlin or online. (The only more recent example that comes to mind was something like “I considered donating to Sci-Hub but then didn’t,” which seems quite innocent to me.) Back when I was young and naive, I asked about such (illegal or uncooperative) options and was promptly informed of their short-sightedness by other EAs. Endorsing Kantian considerations is also something I can do without incurring a social cost.
4
JBeshir
7y
I find it difficult to combine "I want to be nice and sympathetic and forgiving of people trying to be good people and assume everyone is" with "I think people are not taking this seriously enough and want to tell you how seriously it should be taken". It's easier to be forgiving when you can trust people to take it seriously. I've kind of erred on the side of the latter today, because "no one criticises dishonesty or rationalisation because they want to be nice" seems like a concerning failure mode, but it'd be nice if I were better at combining both.

This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.

I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.

Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromi... (read more)

Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:

I’m grateful for this post. Honesty seems undervalued in EA.

An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s ... (read more)

Good Ventures recently announced that it plans to increase its grantmaking budget substantially (yay!). Does this affect anyone's view on how valuable it is to encourage people to take the GWWC pledge on the margin?

1
Castand
7y
A sharper way to put the question would be: by how much should this news make us discount GiveWell's claims about what a $1000 donation can do? (Also because not everyone's going to take the GWWC pledge, of course. The key thing is simply that they donate, and donate wisely.)

It's worth pointing out past discussions of similar concerns with similar individuals.

I'd definitely be happy for you to expand on how any of your points apply to AMF in particular, rather than aid more generally; constructive criticism is good. However, as someone who's been around since the last time we had this discussion, I'm failing to find any new evidence in your writing—even qualitative evidence—that what AMF is doing is any less effective than I'd previously believed. Maybe you can show me more, though?

Thanks for the post.

0
carneades
7y
Thanks for your comment. I speak specifically to AMF and bed net distributions because those are what I have first-hand experience with. The argument is not that, if your only goal is to save lives, AMF does not succeed at this (though I could tell many a story of communities who just put their bed nets up when the observers come around, or of the cultural practice of most people here of staying outside chatting well into the peak mosquito time, but that is an aside). The argument is that we should have other goals than just saving lives, such as creating jobs, letting people choose their own development initiatives, and decreasing dependence.

The question for you is: which part of the argument do you object to? The claim that AMF destroys jobs, limits freedom, and creates dependence; the claim that we should evaluate charities based on these three criteria; or the claim that this means that AMF does net harm? If your only concern is with the last claim, but you would agree with the first two, there are other organizations which focus on capacity building and behavior change which succeed where AMF fails. I am not a critic of all aid, simply of aid whose main focus is giving physical objects, ignoring what people actually want, and making everyone dependent on it, instead of training people, allowing them a say in their development, and working themselves out of a job.

This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!

At the risk of distracting from the main point of this article, I'd like to notice the quote:

Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.

This seems entirely right, consideri... (read more)

I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet... Another thing is that there could be some unexpected obstacle or Chesterton's fence we don't know about yet.

Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the "Status" chapter of Keith Johnstone's Impro, which contains thi... (read more)

Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.

This is an exceptionally good idea! I suspect that such a panel would be taken most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.

I agree that the potential for people to harm EA by engaging in harmful-to-EA behavior under the EA brand will increase as the movement continues to grow. I also think ... (read more)

Thank you for posting this, Ian; I very much approve of what you've written here.

In general, people's ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn't diminish this.

Ideally, I wouldn't have to justify the statement that people's human needs are important on utilitarian grounds, but maybe I should: I'd estimate that I've lost a minimum of $1k worth of productivity over the last 6 months that could have... (read more)

2
IanDavidMoss
8y
I appreciate the vote of confidence! But I should also clarify that my wavering on self-identification with effective altruism has mostly not been due to lack of kindness from other EAs. I've sometimes been asked tough and direct questions, but I fully expected that and didn't consider it any kind of harassment (with one exception where the guy later apologized). It sounds like you've experienced much worse and I'm sorry for that.

It seems like there's a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.

This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture--say, one of nearly adequate funding, and a severe lack of talented people.

This is ok, and should be expected to happen if we're all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who a... (read more)

Nice post. Spending resources on self-improvement is generally something EAs shouldn't feel bad about.

One solution may be different classes of risk aversion. One low-risk class may be dedicated to GiveWell- or ACE-recommended charities, another to metacharities or endeavors of the sort Open Phil might evaluate, and another, high-risk class to yourself, the sort of intervention 80,000 Hours might evaluate.

I do intuit that the best high-risk interventions ought to be more cost-effective than the best medium-risk interventions, which ought to be more cost-effective than... (read more)

Thanks! I've never looked into the Brain Preservation Foundation, but since RomeoStevens' essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I'll have to look into it sometime.

Epistemic status: low confidence on both parts of this comment.

On life extension research:

See here and here, and be sure to read Owen's comments after clicking on the latter link. It's especially hard to do proper cost-effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. SENS is still the best organization I know of that works on anti-aging.

On cryonics:

I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available t... (read more)

You mention that far meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess that you could use a multi-level model to penalize the most meta of concerns, and calculate new expected values for different things that you might fund, but maybe even that wouldn't be sufficient.

It seems like funding a given meta activity on the margin should be given less consideration (i.e. your calculated expected value for funding that thing should be further revised downwards) if x% of charitable funds being spent by EAs are alread... (read more)
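One way to cash out the "multi-level model" suggestion above is a simple per-level penalty on expected-value estimates that grows as meta work gets more crowded. The functional form and the numbers below are my own illustrative assumptions, not something from the comment:

```python
def penalized_ev(raw_ev, meta_level, meta_share, base_discount=0.7):
    """Toy model: shrink a raw expected-value estimate
    (a) geometrically in how 'meta' the activity is (level 0 = direct work), and
    (b) linearly in the share of EA funds already going to meta activities.
    All parameters are illustrative placeholders, not calibrated estimates."""
    level_penalty = base_discount ** meta_level
    crowding_penalty = 1.0 - meta_share
    return raw_ev * level_penalty * crowding_penalty

# E.g. a meta-meta project (level 2) when 30% of funds already go to meta:
print(round(penalized_ev(100.0, meta_level=2, meta_share=0.3), 1))  # -> 34.3
```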

Does anyone have any thoughts on how much we should value leading other people to donate? I mean this in a very narrow sense, and my thoughts on this topic are quite muddled, so I'll try to illustrate what I mean with a simplified example. I apologize if my confusion ends up making my writing unclear.

If I talk with a close friend of mine about EA for a bit, and she donates $100 to, say, GiveWell, and then she disengages from EA for the rest of her life, how much should I value her donation to GiveWell? In this scenario, it seems like I've put some time and... (read more)

5
Owen Cotton-Barratt
9y
I've thought a bit about this in the past. It's a complicated issue because it mixes what's already a philosophically awkward point with significant uncertainty. I'll see if I can get somewhere with untangling it:

First, it may be helpful to remember that the real question is "what actions should I take?", not "how good was this thing I did?". Expectations of how good the different actions are can of course help in choosing what to do.

If you knew precisely the counterfactual that would apply absent your action (and it's that she would never have made that donation and lived an otherwise similar life), it would be correct to say that you'd done $100 worth of good. Likewise from her perspective: if she knew the precise counterfactuals attaching to her donation, it would be correct to say she'd done $100 of good. These numbers don't need to add up to $100; Parfit has a lengthier explanation in "Five Mistakes in Moral Mathematics".

However, in practical terms we aren't that close to precise knowledge of the counterfactuals. Even in theory it's not clear that we could all be, when there are other agents involved. If you model everyone as agents trying to be credited with good for their deeds, then cooperative game theory can give you some tools for assigning credit -- and it will add up to $100. But this doesn't seem quite right as a model either, since it wasn't clear your friend was even playing this game (it may be a better model for splitting credit among EAs).

There are some other advantages of assuming as a heuristic that the credit has to add up to $100. It's relatively easy to apply, and it's fairly robust -- it's harder for a group of people to get confused and collectively do something that's a big mistake. Particularly because there are so many uncertainties when we try to guess counterfactuals, we want to judge on expectations, and the cap is a method of keeping our expectations more anchored to reality.
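Owen's cooperative-game-theory remark plausibly points at constructions like the Shapley value, under which the credit shares always sum to the total good done. Here's a minimal sketch applied to a toy version of the $100 example; the characteristic function (the donation happens only if you talk to your friend and she donates) is my own illustrative assumption:

```python
import math
from itertools import permutations

def shapley(players, value):
    """Shapley value: each player's marginal contribution to the coalition,
    averaged over all orders in which the coalition could have formed.
    The resulting shares sum to value(all players)."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    n_orderings = math.factorial(len(players))
    return {p: t / n_orderings for p, t in totals.items()}

# Toy model: $100 of good happens only if 'you' bring it up AND 'friend' donates.
v = lambda s: 100.0 if {"you", "friend"} <= s else 0.0
print(shapley(["you", "friend"], v))  # -> {'you': 50.0, 'friend': 50.0}
```

This matches Owen's point: crediting each party with their full counterfactual impact needn't sum to $100, whereas a cooperative-game split like this forces the shares to add up.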

Thanks for the encouragement, Ryan!

I've been tentatively considering a career in the actuarial sciences recently. It seems like the field compensates people pretty well, is primarily merit-based, doesn't require much, if any, programming ability (which I don't really have), and doesn't have very many prerequisites to get into, other than strong mathematical ability and a commitment to taking the actuarial exams.

Also, actuarial work seems much slower paced than the work done in many careers that are frequently discussed on 80K Hours, which would make me super happy. I'm a bit burnt out on lif... (read more)

1
RyanCarey
9y
I'm no expert on actuarial studies, but I agree with your description: it seems good, challenging, decently rewarding, reasonably low-stress, and good for people with strong mathematical ability. Regarding programming ability, if you're a relatively analytical thinker, it's pretty feasible to learn to program, even without formal study, so I doubt that it would be a decisive factor for most people.

"I guess that if I wasn't a failure, I would have figured out what I was doing after graduation by now." You're also clearly giving it some thought, considering that you're posting about it here and reading 80,000 Hours, so maybe you needn't feel so downcast about things.

I'm an emotivist-- I believe that "x is immoral" isn't a proposition, but, rather, is just another way of saying "boo for x". This didn't keep me from becoming an EA, though; I would feel hugely guilty if I didn't end up supporting GiveWell and other similar organizations once I have an income, and being charitable just feels nice anyways.

I agree with everything in your two replies to my post.

You know, I'm probably more susceptible to being dazzled by de Grey than most-- he's a techno-optimist, he's an eloquent speaker, he's involved in Alcor, and I personally have a stake in life-extension tech being developed. I'm not sure how much these factors have influenced me in subtle ways while I was writing up my thoughts on SENS.

Anyhow, doing cost-effectiveness estimates is one of my favorite ways of thinking about and better understanding problems, even when I end up throwing out the cost-effectiveness estimates at the end of the day.

I haven't found any such breakdown, even after looking around for a while. The 80,000 Hours interview with Aubrey, as well as a number of YouTube interviews featuring Aubrey (I don't remember which ones, sorry), notes that Aubrey thinks SENS could make good use of $1 billion over the next ten years, but none of these sources justify why this much money is needed.

Thank you for sharing this! I hadn't known that Bronies for Good had switched to fundraising for organizations recommended by GiveWell-- given the variety of organizations that Bronies for Good has supported in the past, I certainly hope that they continue to support EA-approved organizations in the future, rather than moving on to another cause.

We've talked to them at Charity Science, and it sounds like they'll be sticking with GiveWell charities. It's worth highlighting again quite how impressive their fundraising achievements have been: I believe they've raised $220,000 since 2012.

Anti-aging seems like a plausible area for effective altruists to consider giving to, so thank you for raising this thought. It looks like GiveWell briefly looked into this area before deciding to focus its efforts elsewhere.

I've seen a few videos of Aubrey de Grey speaking about how SENS could make use of $100 million per year to fund research on rejuvenation therapies, so presumably SENS has plenty of room for more funding. SENS's Form 990 tax filings show that the organization's assets jumped by quite a lot in 2012, though this was because of de Grey's donat... (read more)

1
Owen Cotton-Barratt
9y
Thanks, lots of useful things here. I absolutely agree with your last paragraph.

I agree that looking more carefully at SENS would be the right move for a deeper investigation of the area. I think before that step it's worth having some idea of roughly how valuable the area is (which is what I was very crudely doing).

I don't put too much stock in the particular numbers I produced here. They make anti-ageing look just slightly less promising than the best direct health interventions we know of (hence indeed better than a lot of medical research), but the previous time I came up with numbers for this problem -- for a conference talk -- I must have been in a more optimistic mood, because my estimate was a couple of orders of magnitude better. I wouldn't be surprised if the truth is somewhere in the middle.

I would like to see more people provide estimates, even if not carefully justified, as I think we can get some wisdom of the crowds coming through, and to understand which figures are the most controversial or would benefit most from careful research.

Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.

I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do ... (read more)

2
Benjamin_Todd
9y
Hi Fluttershy,

Really appreciate hearing your feedback. We've written about how to choose what subject to study a bunch of times, but I agree it's hard to find, and it's not a major focus of what we do. Unfortunately we have very limited research capacity and have decided to focus on choosing jobs rather than subjects, because we think we'll be able to have more impact that way. In the future I'd love to have more content on subject choice though.

I also realise our careers list comes across badly. I'm really keen to expand the range of careers that we consider - we're trying to hire someone to do more career profiles but haven't found anyone suitable yet. Being an actuary and engineering are both pretty high on the list.

I also know that a lot of people around 80,000 Hours think most people should do earning to give. That's not something I agree with. Earning to give is just one of a range of strategies.

Ben
1
AGB
9y
"Actually, I can't find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on this topic posted to LessWrong." Not a tremendous excuse, but it wouldn't surprise me if this is basically because 80k is UK-based, where there is no strong analogue to 'choosing a major' as practised by US undergraduates; by the time someone is an undergraduate in the UK (actually, probably many months before that, given application deadlines), they've already chosen their subject and have no further choices to make on that front except comparatively minor specialisation choices.
0
RyanCarey
9y
Not to take away from the substance of your post, but when you note that impact is power-law distributed, doing important scientific research sounds [much](https://80000hours.org/2012/08/should-you-go-into-research-part-1/) [more skill-dependent](https://80000hours.org/2013/01/should-you-go-into-research-part-2/) than quantitative finance.
2
John_Maxwell
9y
Seems like 80K could probably stand to link to more of Cognito Mentoring's old stuff in general. No reason to duplicate effort.