david_reinstein's Shortform

ImpactMatters acquired by CharityNavigator; but is it being incorporated/presented/used in a good way?

ImpactMatters was founded in 2015 by Dean Karlan and Elijah Goldberg. They brought evidence-based impact ratings to a wider set of charities than GiveWell: rather than focusing only on the very most effective charities, they investigated impact and effectiveness across a broad range of charities willing to participate. (In some ways, this resembled SoGive.) For example, "in November 2019, ImpactMatters released over 1,000 ratings."

I saw strong potential for ImpactMatters to move an EA-adjacent impactfulness metric beyond a small list of GiveWell and ACE charities, to move the conversation, get charities to compete on this basis, and ‘raise awareness’ (ugh, hate that expression). (I was not so happy that they rated much-less-impactful USA-based charities alongside international charities without making the distinction clear, but perhaps that was a necessary evil.)

In late 2020, Charity Navigator acquired ImpactMatters. They have since added "Impact & Results" scores for 100 or more charities; these are incorporated into their 'Encompass Rating' but not, if I understand correctly, into their most basic, prominent, and famous star-rating system (it is complicated).

I think this has great positive potential, for the same reasons I thought ImpactMatters had potential... and even more so for 'bringing this into the mainstream'.

However, I'm not fully satisfied with the way things are presented:

  1. The Impact & Results scores don't seem to convey a GiveWell-like 'impact per dollar' measure.
  2. In the presentation, they are somewhat folded into and mixed up with the Encompass ratings. E.g., I couldn't figure out how to sort or filter charities by the 'Impact & Results' score itself.
  3. Impact scores are not prominent or even mentioned when one is browsing most categories of charities (e.g., my mother was looking for charities her organization could support dealing with "Human trafficking, COVID-19, hunger, or the environment", and nothing about impact came up).
  4. In some presentations on their page, cause categories with order-of-magnitude differences in impact are presented side by side, but the ratings are only comparable within a category. Thus, a charity building wells in Africa may receive a much lower score, and thus appear much less effective, than a charity giving university scholarships to students in the USA.
  5. They only have impact ratings for eight charities working internationally (vs. 186 ratings for charities that only work within regions of the USA, I believe), and none for animal charities or other EA-relevant causes, as far as I know.

What do you think? Is this being used well? How could it be done better? How could we push them in the right direction?

I spent a few minutes looking at the impact feature, and I... will also go with "not satisfied". 

From their review of Village Enterprise:

Impact & Results scores of livelihood support programs are based on income generated relative to cost. Programs receive an Impact & Results score of 100 if they increase income for a beneficiary by more than $1.50 for every $1 spent and a score of 75 if income increases by more than $0.85 for every $1 spent. If a nonprofit reports impact but doesn't meet the threshold for cost-effectiveness, it earns a score of 50.

My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).

But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money. 

(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)

*****

Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.

There was some discussion of the original acquisition here.

Historically, Charity Navigator has been extremely hostile to effective altruism, as you probably know, so perhaps this isn't surprising. 

Thank you; I had not seen Luke Freeman (@givingwhatwecan)'s earlier post.

That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.

I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.

I'm

  • reading EA forum posts
  • and comments
  • and some links
  • and adding some comments/thoughts of my own

HERE (podcast 'found in the struce', available on all platforms).

I think this will help people who have limited screen time get more from the EA Forum.

I’d like to encourage others to also narrate/record forum posts; I would love to listen to these on long drives/walks.

AI consciousness and valenced sensations: unknowability?

A variant of the Chinese room argument? This seems ironclad to me; what am I missing?

My claims:

Claim: AI feelings are unknowable. Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?

Corollary: If we cannot know which are which, we can do nothing that we know will improve or worsen the “AI feelings”; so it’s not decision-relevant.

Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.

Justification II (related): AI, as I understand it, is coded to learn to solve problems, to maximize or optimize certain outcomes, or to do things it “thinks” will yield positive feedback.

We might think, then, that the AI ‘wants’ to solve these problems, and that things that bring it closer to the solution make it ‘happier’. But why should we think this? For all we know, it may feel pain when it gets closer to the objective, and pleasure when it avoids it.

Does it tell us that it makes it happy to come closer to the solution? That may be merely because we programmed it to learn how to come to a solution, and one thing it ‘thinks’ will help is telling us it gets pleasure from doing so, even though it actually feels pain.

A colleague responded:

If we get the AI through a search process (like training a neural network) then there's a reason to believe that AI would feel positive sensations (if any sensations at all) from achieving its objective since an AI that feels positive sensations would perform better at its objective than an AI that feels negative sensations. So, the AI that better optimizes for the objective would be more likely to result from the search process. This feels analogous to how we judge bio-based living things in that we assume that humans/animals/others seek to do those things that make them feel good, and we find that the positive sensations of humans are tied closely to those things that evolution would have been optimizing for. A version of a human that felt pain instead of pleasure from eating sugary food would not have performed as well on evolution's optimization criteria.

OK, but this seems to hold only if we:

  1. Knew how to induce or identify "good feelings"
  2. Decided to induce these and tie them in as a reward for getting close to the optimum.

But how on earth would we know how to do (1), at least without biology, and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?

Please tell me why I'm wrong.