ChrisJenkins

37 karma · Joined Sep 2014

Comments: 9

You mentioned the separation of parents and offspring, but it's worth including the economics of (non-crated) veal production in your considerations as well. Roughly, each dairy cow is productive for the last three of the five years it is kept alive, and is impregnated once per year to maintain an optimal lactation cycle. (source)

I'd hypothesize that an adult animal's overall suffering per day would be less than that of a calf undergoing the stress of separation from its parent. Since veal calves are slaughtered while still in this state, there could be an even stronger case that their lives are overwhelmingly negative.

Great idea! I like that you give particular reading suggestions that aren't inside-EA information sources. Still, I'd worry that people can quickly read up on the general EA platform and might then conclude that toeing the GiveWell line is their best strategy for winning. That would at least get them to engage with that information enough to summarize it, which is useful. But it might be worth explicitly asking for consideration of interventions that differ from the major program types recommended by GiveWell/TLYCS/etc.

(I don't know why this would matter, but it also occurs to me that the most serious contestants are probably reading these comments. Hello!)

I haven't read any of LW's debates on this, so I'm not sure why one would be interested in whether the relationship between demographics and intelligence is weaker than usual among EAs, or what that would imply about EA. Mainly, I'd like to know by what routes people with predicted-to-be-average intelligence and average educational backgrounds come to EA. I hope age, years of education, and occupation will be included in the survey, so that the option exists of using the estimation techniques referred to above.

Having said that - intelligence research is politically toxic, and I'd also worry that people could spread bad ideas about how to use IQ estimates (e.g., general bragging rights, or "the smartest EAs focus on X, so we should pay more attention to X"), so I wouldn't argue for including anything related to IQ estimation in publicly-announced results.

The best IQ proxy questions are demographic variables anyway (age, years of education, and occupation), which predict about 50% of the variance in full-scale IQ - see papers shared here: http://jmp.sh/b/V717o7yuqvQutQYTHIMh

It wouldn't be hard to plug data we're going to get anyway into Crawford's regression equation - the only extra work would be mapping occupations to the standard occupational classification system. Reporting it could be bad PR, but it wouldn't hurt for anyone who's interested to take a look.
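To make the "plug in the data" step concrete, here's a minimal sketch of the kind of calculation involved. The coefficients below are placeholders for illustration only - the real values would have to be taken from Crawford's published equation, and occupation codes from the standard occupational classification:

```python
# Hypothetical sketch of a Crawford-style demographic regression for
# estimating full-scale IQ. All coefficients are PLACEHOLDERS, not the
# published values.

def predict_iq(age, years_education, occupation_code,
               intercept=85.0, b_age=0.05, b_edu=1.5, b_occ=-2.0):
    """Linear combination of demographic predictors (placeholder weights)."""
    return (intercept
            + b_age * age
            + b_edu * years_education
            + b_occ * occupation_code)

# Example: a 30-year-old with 16 years of education in occupation class 2.
estimate = predict_iq(age=30, years_education=16, occupation_code=2)
```

The point is just that once the survey records those three variables, the estimate is a one-line weighted sum per respondent.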

The game will run at least through the end of Peter's UK book tour (the last event of which is currently June 9), but TLYCS hasn't committed to an end date yet, because they'll want to keep it going if it's still driving donations after the tour.

The main things we know in advance that we'll focus on this year are those two books and the EA Global summit. We're also keeping an eye out for unexpected situations where a social media response could be useful. For example, when Will MacAskill's ice bucket challenge article received widespread attention last year, a coordinated response could have helped direct its readers to the follow-up article that went into greater depth about impact metrics.

It would be good to know when a lot of people in the group are interested in coordinating to promote something CEA might not already be thinking about, though. In the short term we can discuss possibilities on the Facebook group (invitations for that will be going out by email soon).

There's a "charity portfolio management" startup called The Agora Fund that's trying to funnel money to high-impact nonprofits. It seems to be using a reasonable definition of "impact" (e.g., they use GiveWell as an information source and list GiveDirectly and Deworm the World as top charities). I'd never heard of them before, but saw that they're hiring in New York.

They try to tailor impact research reports to donors' priorities, aggregate donations for payment and tax purposes, and take a cut of between 2.25% and 7.25% of the donated amount. (The higher figure applies to charities they provide research reports on, so the cut is actually larger for the charities they recommend.) Page 13 describes the service levels/costs/benefits.
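To put the fee range in perspective, here's an illustrative calculation of what reaches a charity at the two ends of the reported range (the exact tier structure is on their page 13; this is just the arithmetic):

```python
# Illustrative only: net amount reaching a charity under the reported
# 2.25%-7.25% fee range.

def net_donation(amount, fee_rate):
    """Donation amount remaining after the platform's cut."""
    assert 0.0225 <= fee_rate <= 0.0725, "outside the reported fee range"
    return amount * (1 - fee_rate)

# A $1,000 donation at the two ends of the range:
low_fee = net_donation(1000, 0.0225)   # about $977.50 reaches the charity
high_fee = net_donation(1000, 0.0725)  # about $927.50 reaches the charity
```

So on a $1,000 donation, the difference between tiers is roughly $50 - worth weighing against whatever value the research reports add.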

The fact that it's a for-profit company is a bit surprising. Does anyone have any thoughts about whether this seems useful, beneficial, likely to succeed, etc.?

Thanks for raising this topic. Your position probably captures what many people think when they first hear about "earning to give". It's difficult to engage with most of your points, though, because in the last sentence of your reply you seem to favor a situation where conditions deteriorate, rather than improve, for impoverished people, so that political changes you believe will ultimately be beneficial can take place. That may be correct under some circumstances, but in general the burden of proof would be on you.

Alternatively, if you think political change is the way to ultimately help people, wouldn't you want high-earning people to support efforts at political change, given that there are advocacy organizations in need of funds? Would you agree that, in principle, the positive effects of that investment could outweigh the negative effects of that person's marginal usefulness to their employer, relative to the next-best-qualified potential employee?

Thanks for writing this! I think it could be a useful way of referring to a baseline philosophical position among EAs regarding impartiality toward currently living humans. Beyond that consensus, it seems reasonable to suppose that there will be room in the movement for people who judge impact from a variety of philosophical perspectives. It would be interesting if charity evaluators took that diversity into consideration when evaluating effectiveness. The most obvious split might be over beliefs about the subjective experiences of non-human animals, which is why it makes sense for a group like Animal Charity Evaluators to focus on impact on animals if it's being neglected elsewhere.

Different flavors of utilitarianism could also make a substantial difference. "Total" versus "prior-existence" views could affect whether one assigns overwhelming value to ensuring that very many very happy people will come to exist in the future. The degree to which one is hedonistic or negative-leaning could affect whether a charity that alleviates a given number of cases of non-fatal causes of suffering (e.g., SCI) might be favored over one that saves a given number of lives (e.g., AMF).