All of Katja_Grace's Comments + Replies

> many AI researchers just don’t seem too concerned about the risks posed by AI, so may not have opened the survey

Note that we didn't tell them the topic that specifically.

> I am wondering whether a better approach would instead be to randomly sample a subset of potential respondents (say, 4,000 people), and offer to compensate them at a much higher rate (e.g., $100).

We tried sending them $100 last year, and if anything it lowered the response rate.

If you are inclined to dismiss this based on your premise "many AI researchers just don’t seem too concerned …

> Note that we didn't tell them the topic that specifically.

I understand that, and think this was the right call. But there seems to be a consensus that, in general, a response rate below ~70% introduces concerns of non-response bias, and when you're at 15% (with, imo, good reason to think there would be non-response bias) you really cannot rule this out. (Even basic stuff like: responders probably earn less money than non-responders, and are thus probably younger and work in academia rather than industry, etc.; responders are more likely to be familiar with the pri…
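As a rough illustration of the worry (a minimal sketch with hypothetical numbers, not anything from the survey itself): with a 15% response rate, even simple worst-case bounds on a population proportion are extremely wide, since non-responders could in principle all lean either way.

```python
# Worst-case ("Manski") bounds on a population proportion under non-response.
# If the response rate is r and a proportion p_hat of responders endorse a
# claim, the true population proportion lies in [p_hat*r, p_hat*r + (1 - r)]:
# non-responders could in principle be all-no (lower) or all-yes (upper).

def nonresponse_bounds(p_hat: float, response_rate: float) -> tuple[float, float]:
    lower = p_hat * response_rate
    upper = p_hat * response_rate + (1 - response_rate)
    return lower, upper

# Hypothetical numbers: 60% of responders endorse some claim, 15% respond.
low, high = nonresponse_bounds(p_hat=0.60, response_rate=0.15)
print(f"True proportion could be anywhere from {low:.0%} to {high:.0%}")
# -> True proportion could be anywhere from 9% to 94%
```

These bounds assume nothing about how non-responders differ from responders; any actual estimate of the bias would need the kind of demographic comparison mentioned below.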

Thank you both for your past and future work on EV, and best wishes to both of you in your new roles. Really looking forward to seeing you more in the geographic vicinity of Open Phil!

Possible, but likely a smaller effect than you might think because: a) I was very ambiguous about the subject matter until they were taking the survey (e.g. did not mention AGI or risk or timelines) b) Last time (for the 2016 survey) we checked the demographics of respondents against those for a random subset of non-respondents, and they weren't very different.

Participants were also mostly offered substantial payment for taking the survey ($50 usually, for a ~15m survey), in part in the hope of making payment a larger motivator than the desire to express some particular view, but I don't think payment actually made a large difference to the response rate, so it probably failed to have the desired effect on possible response bias.

> I would be very excited to see research by Giving Green into whether their approach of recommending charities which are, by their own analysis, much less cost effective than the best options is indeed justified.

Several confusions I have:

  • When did they say these were much less cost-effective? I thought they just failed to analyze cost effectiveness? (Which is also troubling, but different from what you are saying, so I'm confused)
  • What do you mean by it being justified? It looks like you mean 'does well on a comparison of immediate impact', but, supposing …
8
alex lawsen (previously alexrjl)
3y
I asked them! The website does now make it clear, I think, that they think policy options are best, though some of that is a recent change, and the language is still less effective than I'd like. You're right that I meant "does well on a comparison of immediate impact" here, but your second point is, I think, really important.

Having said that, while it's worth thinking about, I don't think the current presentation of the difference between offsetting and policy intervention could fairly be described as "dishonest". I think it is clear that GG thinks policy is more effective; it's just that the size of the difference is not emphasised.

I agree that, even in worlds where it produces the most immediate good from a donation perspective, presenting two options as equal when you think they are not is dishonest, and not justifiable. I don't think Giving Green has ever intended to do that, though.

In terms of CATF vs Sunshine, I had initially suspected that it might be the case that they thought CATF was much better but that Sunshine was worth including to capture a section of the donations market which broadly likes progressive stuff. I agree that this would not be acceptable without a caveat that they thought CATF was best. Having spoken to them, I don't think this is the case (and Dan can confirm if he's still following the thread); I think they genuinely think that there's no difference in expectation between CATF and TSM. I strongly disagree with this assessment, but do believe it to be genuine.

Do you have quantitative views on the effectiveness of donating to these organizations that could be compared to other actions? (Or could you point me to any links that go to something like that?) Sorry if I missed them.

0
VinceB
6y
Charity Science Health also featured really good RCTs in their proposal, which you can see there or just google. LMK if I should link them. There is also the promise of future data in this arena: JPAL, WHO, and a few other orgs are setting their sails to investigate this as well, so the data will be getting much better. If WHO and JPAL are interested, there's at the least something big to investigate for sure, and to get that data you need programs to be active.
3
Peter Wildeford
6y
I focused more on identifying organizations that met the three criteria I outlined and then vetting them individually. Because I was just looking for organizations I felt confident were "good enough to be considered above average", I did not take the time to develop quantitative views for them yet. I'm also not sure if such views would be useful.

For Charity Science Health, I'd rely on "What is the expected value of creating a GiveWell top charity?". While published in Dec 2016, I've revisited the underlying numbers in May 2017 and Dec 2017 and found them to still be roughly the same. Notably, this estimate is for the value of time spent on the project rather than the value of marginal funding, but I think the two would be roughly equivalent.

For the Sentience Institute or the Wild-Animal Suffering Research Institute, I have a rough guess as to the value of cause prioritization efforts generally speaking, and I think these organizations would fall under that. Again, this estimate is looking at the value of time spent rather than the value of marginal funding, but that shouldn't really matter.

For Rethink Charity, I don't have any quantitative estimates at this time. I tried making one for the Local Effective Altruism Network (LEAN) last year, but was held back by not having any quantitative information about local groups. LEAN has put a lot of time into improving this quantitative situation this year, publishing one report and aiming to publish more. This should make constructing a quantitative estimate possible.

It seems worth distinguishing 'effectiveness' in the sense of personal competence (as I guess is meant in the first case, e.g. 'reasonably sharp') and 'effectiveness' in the sense of trying to choose interventions by cost-effectiveness.

Also remember that selecting people to encourage in particular directions is a subset of selecting interventions. It may be that 'E not A' people are more likely to be helpful than 'A not E' people, but that chasing either group is less helpful than doing research on E that is helpful for whichever people already care about it. I think I have stronger feelings about E-improving interventions overall being good than about which people are more promising allies.

Yeah, and among common intuitions I think. But I thought EAs were mostly consequentialists, so the intended role of obligations is not obvious to me.

0
arrowind
9y
I think the survey of EAs from the start of the year picked up a few hundred non-consequentialists. It had a high percentage of consequentialists, but emphasized that this figure shouldn't be taken as covering all EAs out there.

I'm curious about the implicit framework where some things are obligatory and some things are choices.

1
[anonymous]
9y
I suspect the social intuition of when we consider someone obligated has at least a little to do with the level of personal sacrifice required. As in, you are almost always obligated to be good if the personal cost to you is nothing, and you are almost never obligated to be good if it costs you a great deal. (Which is why you are obligated to save a drowning child, but you are a hero if you save the same child from a dangerous burning building.)

If Singer says we're "obligated" to be effective altruists, he's trying to transfer the social norm we have for being obligated to save drowning children, because the personal cost is very slight, over to being obligated to, say, buy mosquito nets, because the personal cost is very slight. (Personal morality, divorced from social ideas of what is an obligation, might of course widely differ.)

That's also combined with whether the person is culpable. (You're obligated to clean up your mess, but you're extra good if you clean up someone else's.)
0
arrowind
9y
Isn't that a common distinction among philosophers? I recall that there's a technical name for it.

We evaluated all of the projects other than the three I specifically mentioned not evaluating. Sorry for not writing up the other evaluations - we just didn't have time. We bought the ones that gave us the most impact per dollar, according to our evaluations (and based on the prices people wanted for their work). So we didn't purchase Joao's work this round because we calculated that it was somewhat less cost-effective than the things we did purchase, given the price. We may still purchase it in a later round.

1
Evan_Gaensbauer
9y
Thanks for the response. That's great feedback to hear.

Changing one's values does not more effectively promote the values one initially has, so it seems one should be averse to it. I think the expanding circle case is more complicated - the advocates of a wider circle are trying to convince the others that those others are mistaken about their own existing values, and that by consistency they must care about some entities they think they don't care about. This is why the phenomenon looks like an expanding circle - points just outside a circle look a lot like points just inside it, so consistency pushes the circle outwards (though this doesn't explain why the circle expands rather than contracts).

1
Tom_Ash
9y
Unless you're a moral realist, and want to have the correct values.
0
Evan_Gaensbauer
9y
That makes more sense. I haven't read much philosophy, or engaged with that sort of thinking very deeply, so I often get confused about what I or others (are supposed to) mean by the word 'value'. I meant that people would be more effective if they altered their actions to be more in line with their values after they were updated for consistency. If someone says "I don't value X" one day, and "I now value X" the next day, I myself semantically think of that as a 'change of values' rather than 'an update of values toward greater behavioral consistency'. The latter definition seems to be the more common one around these parts, and also more precise, so I'll just go with that one from now on.

It seems there are some common states where this comes up, such as when one person is doing a thing which they think is good, given personal constraints that are hidden from their conversation partner, and worries that they are harshly judged because the constraints are hidden. Or where one person is trying out a thing because they think it might be very good, though they don't already think it is very good (except for VOI), and worry that others think they are actually advocating for something suboptimal. Or where one person doesn't think what they are d…

> the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

The kinds of evidence available for some EA interventions, e.g. existential risk ones, don't seem different in kind to the evidence probably available earlier i…

0
Evan_Gaensbauer
10y
Effective altruism as a social movement emerged as the confluence of clusters of non-profit organizations based out of San Francisco, New York, and Oxford
4
atucker
10y
My other point was that EA isn't new, but that we don't recognize earlier attempts because they weren't really doing evidence in a way that we would recognize. I also think that x-risk was basically not something that many people would worry about until after WWII. Prior to WWII there was not much talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.

I agree that those I mentioned are probably not the only contentious claims, and that the one you mention in particular is probably another.

Interesting suggestions.

I'd expect the internet to make many minority causes and interests more successful by letting their rare supporters get together, and I think it has had this effect. However, that doesn't seem to explain why they are minority causes to begin with.

Do you mean that before computer programming the philosophically minded just didn't have lucrative professions?

Have we recently passed some threshold in high-quality evidence for what works in aid? I'd expect that in future we'll think of the 2014 level of evidence as low, and still say we only recently got good evidence.

0
tomstocker
9y
Before the internet, it probably didn't make sense to organise around such a high level of abstraction away from concrete goals. Before the modern economy, it probably didn't make that much sense to invest so much time into thinking about alternatives in this way, though some utilitarians seem to have done so anyway.

The last four paragraphs of this post are repeated from earlier, and appear to be cutting out some of the original post.

0
RyanCarey
10y
Thanks, it should be fixed now.

Good point. I have wondered before whether something like anti-cosmopolitanism sometimes accounts for disagreement that is attributed to anti-effectiveness or anti-altruism. Cosmopolitanism seems like a much more plausible thing for a human to dislike than effectiveness, especially given the popularity of effectiveness in other areas of life.