As a guy who used to be female (I was AMAB), Kelly's post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen's experiences with respect to this.
The change in how you're treated is much more noticeable when making progress in the direction of becoming more guyish; not sure if this is because this change tends to happen quickly (testosterone is powerful + quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma making up some of the posi...
I like the article. The first table makes it viscerally clear that the VOI for better estimating eta (or for finding a better model of utility as a function of consumption on the margin) could be high, if you're relatively more interested in global-poverty-focused EA than in other causes within EA.
I'm not aware of any better figures you could have used for GWWC/TLYCS/REG's leverage, and I'm not sure if many of us take estimates of leverage for meta-organizations literally, even relative to how literally we take normal EA cost-effectiveness estimates....
I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though:
I might be a bit of an outlier
This factor may push in the opposite direction from the one you'd expect, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to be more popular than it would be if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) had stuck around to give you their opinion on whether you should use that example. So, keep doing what you're doing! I like your approach.
The objection about it being ableist to promote funding for trachoma surgeries rather than guide dogs doesn't have to do with how many QALYs we'd save from providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we're showing to disabled people. I'm not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences precisely because it's ableist. (It's also possible that using the example a...
...It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.
And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we
A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had gone toward examining people's concerns re: conflicts of interest and the centralization of power, i.e. concerns which were commonly expressed but not resolved.
I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would ...
In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.
Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much--I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticis...
I appreciate that the post has been improved a couple times since the criticisms below were written.
A few of you were diligent enough to beat me to saying much of this, but:
Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.
This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communicati...
Things don't look good regarding how well this project has been received
I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of su...
This is a problem, both for the reasons you give:
Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.
and through this mechanism, which you correctly point out:
The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.
The above two considerations combine...
I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:
the risk of losing flexibility by enforcing what is an “EA view” or not
It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on in...
Noted! I can understand that it's easy to feel like you're overstepping your bounds when trying to speak for others. Personally, I'd have been happy for you all to take a more central leadership role, and would have wanted you all to feel comfortable if you had decided to do so.
My view is that we still don't have reliable mechanisms to deal with the sorts of problems mentioned (e.g. the Intentional Insights fiasco), so it's valuable when people call out problems as they have the ability to. It would be better if the EA community had ways of calling out suc...
I believe you when you say that you don't benefit much from feedback from people not already deeply engaged with your work.
There's something really noticeable to me about the manner in which you've publicly engaged with the EA community through writing for the past while. You mention that you put lots of care into your writing, and what's most noticeable about this for me is that I can't find anything that you've written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really ...
When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.
Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.
I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.
I notice that, in the first pa...
I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether or not people who care about similar things as we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.
But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences
I'm hap...
There's nothing necessarily intersectional/background-based about that
People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.
One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect for others' time would be:
We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.
I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have! I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than yo...
For me, most of the value I get out of commenting in EA-adjacent spaces comes through tasting the ways in which I gently care about our causes and community. (Hopefully it is tacit that one of the many warm flavors of that value for me is in the outcomes our conversations contribute to.)
But I suspect that many of you are like me in this way, and also that, in many broad senses, former EAs have different information than the rest of us. Perhaps the feedback we hear when anyone shares some of what they've learned before they go will tend to be less rewarding...
Personally, I've noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the "opposite" strategy, of being more positive on the pledge and feeling quite comfortable with Givewell's approach to splitting, are seeing a very different social landscape than I am. Maybe they're aware of people who would...
You're clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.
When the word "care" is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word "care" is used carelessly. This has all sorts of hard to track negative effects; the sort of people who are irked by things like misuse of the word "care"...
What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.
I'd be interested in c...
It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.
Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?
It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.
Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.
It makes some sense that there could be gaps which Open Phil isn't able to fill, even if Open Phil thinks they're no less effective than the...
Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3
I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)
I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.
This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.
I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.
Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromi...
Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:
I’m grateful for this post. Honesty seems undervalued in EA.
An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s ...
Good Ventures recently announced that it plans to increase its grantmaking budget substantially (yay!). Does this affect anyone's view on how valuable it is to encourage people to take the GWWC pledge on the margin?
It's worth pointing out past discussions of similar concerns with similar individuals.
I'd definitely be happy for you to expand on how any of your points apply to AMF in particular, rather than aid more generally; constructive criticism is good. However, as someone who's been around since the last time we had this discussion, I'm failing to find any new evidence in your writing—even qualitative evidence—that what AMF is doing is any less effective than I'd previously believed. Maybe you can show me more, though?
Thanks for the post.
This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!
At the risk of distracting from the main point of this article, I'd like to notice the quote:
Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.
This seems entirely right, consideri...
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet... Another thing is that there could be some unexpected obstacle or Chesterton's fence we don't know about yet.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the "Status" chapter of Keith Johnstone's Impro, which contains thi...
Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.
I agree that the potential for people to harm EA by acting harmfully under the EA brand will increase as the movement continues to grow. I also think ...
Thank you for posting this, Ian; I very much approve of what you've written here.
In general, people's ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn't diminish this.
Ideally, I wouldn't have to justify the statement that people's human needs are important on utilitarian grounds, but maybe I should: I'd estimate that I've lost a minimum of $1k worth of productivity over the last 6 months that could have...
It seems like there's a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.
This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture--say, one of nearly adequate funding, and a severe lack of talented people.
This is ok, and should be expected to happen if we're all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who a...
Nice post. Spending resources on self-improvement is generally something EAs shouldn't feel bad about.
One solution may be different classes of risk aversion. One low-risk class may be dedicated to GiveWell- or ACE-recommended charities, another to metacharities or endeavors as Open Phil might evaluate, and another high-risk class to yourself, an intervention as 80,000 Hours might evaluate.
I do intuit that the best high-risk interventions ought to be more cost-effective than the best medium-risk interventions, which ought to be more cost-effective than...
Thanks! I've never looked into the Brain Preservation Foundation, but since RomeoStevens' essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I'll have to look into it sometime.
Epistemic status: low confidence on both parts of this comment.
On life extension research:
See here and here, and be sure to read Owen's comments after clicking on the latter link. It's especially hard to do proper cost-effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. SENS is still the best organization I know of that works on anti-aging, though.
On cryonics:
I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available t...
You mention that far meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess that you could use a multi-level model to penalize the most meta of concerns, and calculate new expected values for different things that you might fund, but maybe even that wouldn't be sufficient.
It seems like funding a given meta activity on the margin should be given less consideration (i.e. your calculated expected value for funding that thing should be further revised downwards) if x % of charitable funds being spent by EAs are alread...
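The multi-level penalty idea above could be sketched quantitatively. A minimal illustration (all names, weights, and values here are my own assumptions, not anyone's actual model): shrink a naive expected-value estimate toward a skeptical prior, with the prior weighted more heavily for each additional "meta" level an intervention sits at.

```python
# Illustrative sketch: penalize expected-value estimates for meta activities
# by shrinking them toward a skeptical prior. Deeper meta levels (funding
# the funders of the funders...) get shrunk more aggressively.

def penalized_ev(naive_ev, meta_level, prior_ev=1.0, base_weight=0.5):
    """Shrink naive_ev toward prior_ev; deeper meta levels get more shrinkage.

    All parameter names and default values are illustrative assumptions.
    meta_level=0 means a direct intervention; each increment is one more
    level of meta-ness.
    """
    # Weight on the skeptical prior grows with meta_level, approaching 1.
    w = 1 - (1 - base_weight) ** (meta_level + 1)
    return w * prior_ev + (1 - w) * naive_ev

# A direct intervention keeps more of its naive estimate than a
# meta-meta project with the same naive EV.
direct = penalized_ev(10.0, meta_level=0)   # -> 5.5
meta2 = penalized_ev(10.0, meta_level=2)    # -> 2.125
assert direct > meta2
```

This is just one possible shape for such a penalty; a real multi-level model would estimate the shrinkage weights from data rather than fixing them by hand.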
Does anyone have any thoughts on how much we should value leading other people to donate? I mean this in a very narrow sense, and my thoughts on this topic are quite muddled, so I'll try to illustrate what I mean with a simplified example. I apologize if my confusion ends up making my writing unclear.
If I talk with a close friend of mine about EA for a bit, and she donates $100 to, say, GiveWell, and then she disengages from EA for the rest of her life, how much should I value her donation to GiveWell? In this scenario, it seems like I've put some time and...
I've been tentatively considering a career in actuarial science recently. It seems like the field compensates people pretty well, is primarily merit-based, doesn't require much, if any, programming ability (which I don't really have), and doesn't have very many prerequisites to get into, other than strong mathematical ability and a commitment to taking the actuarial exams.
Also, actuarial work seems much slower paced than the work done in many careers that are frequently discussed on 80K Hours, which would make me super happy. I'm a bit burnt out on lif...
I'm an emotivist-- I believe that "x is immoral" isn't a proposition, but, rather, is just another way of saying "boo for x". This didn't keep me from becoming an EA, though; I would feel hugely guilty if I didn't end up supporting GiveWell and other similar organizations once I have an income, and being charitable just feels nice anyways.
I agree with everything in your two replies to my post.
You know, I'm probably more susceptible to being dazzled by de Grey than most-- he's a techno-optimist, he's an eloquent speaker, he's involved in Alcor, and I personally have a stake in life-extension tech being developed. I'm not sure how much these factors have influenced me in subtle ways while I was writing up my thoughts on SENS.
Anyhow, doing cost-effectiveness estimates is one of my favorite ways of thinking about and better understanding problems, even when I end up throwing out the cost-effectiveness estimates at the end of the day.
I haven't found any such breakdown, even after looking around for a while. The 80,000 Hours interview with Aubrey, as well as a number of YouTube interviews featuring Aubrey (I don't remember which ones, sorry) note that Aubrey thinks SENS could make good use of $1 billion over the next ten years, but none of these sources justify why this much money is needed.
Thank you for sharing this! I hadn't known that Bronies for Good had switched to fundraising for organizations recommended by GiveWell-- given the variety of organizations that Bronies for Good has supported in the past, I certainly hope that they continue to support EA-approved organizations in the future, rather than moving on to another cause.
We've talked to them at Charity Science, and it sounds like they'll be sticking with GiveWell charities. It's worth highlighting again quite how impressive their fundraising achievements have been: I believe they've raised $220,000 since 2012.
Anti-aging seems like a plausible area for effective altruists to consider giving to, so thank you for raising this thought. It looks like GiveWell briefly looked into this area before deciding to focus its efforts elsewhere.
I've seen a few videos of Aubrey de Grey speaking about how SENS could make use of $100 million per year to fund research on rejuvenation therapies, so presumably SENS has plenty of room for more funding. SENS's Form 990 filings show that the organization's assets jumped by quite a lot in 2012, though this was because of de Grey's donat...
Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.
I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do ...
Yeah, this sort of thing is basically always in danger of becoming politics all the way down. One good heuristic is to keep in mind the goals you hope to satisfy by engaging--if you want to figure out whether to accept an article's central claim, is the answer to your question decisive with respect to your decision? If you're trying to sway people, are you being careful to make sure it's plausibly deniable that you're doing anything other than truthseeking? If you're engaging because you think it's impactful to do so, are you treating your engagement as a tool rather than an end?