November 2022 update: I wrote this post during a difficult period in my life. I still agree with the basic point I was gesturing towards, but regret some of the presentation decisions I made. I may make another attempt in the future. 


"A system that ignores feedback has already begun the process of terminal instability."

– John Gall, Systemantics

 

(My request from last time still stands.)

 

jimrandomh wrote a great comment in response to my last post:

The core thesis here seems to be:

"I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact."

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we're doing pretty well.)
  3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren't important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
  4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.

(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually at least positive about the pivot, at least from the people who matter. But getting people who've internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.

...

 

I replied:

Thank you for this thoughtful reply! I appreciate it, and the disambiguation is helpful. (I would personally like to do as much thinking-in-public about this stuff as seems feasible.)

I mean a combination of (1) and (4). 

I used to not believe that (4) was a thing, but then I started to notice (usually unconscious) patterns of (4) behavior arising in me, and as I investigated further I kept noticing more & more (4) behavior in me, so now I think it's really a thing (because I don't believe that I'm an outlier in this regard).

...

 

I agree with jimrandomh that (4) is the most interesting version of this claim. What would it look like if the cluster of EA & Rationality organizations I pointed to last time were implicitly deprioritizing getting feedback from reality?

I don't have a crisp articulation of this yet, so here are some examples that seem to me to gesture in that direction:

 

Please don't misunderstand – I'm not suggesting that the people involved in these examples are doing anything wrong. I don't think that they are behaving malevolently. The situation seems to me to be more systemic: capable, well-intentioned people begin participating in an equilibrium wherein the incentives of the system encourage drift away from reality. 

There are a lot of feedback loops in the examples I list above... but those loops don't seem to connect back to reality, to the actual situation on the ground. Instead, they seem to spiral upwards – metrics tracking opinions, metrics tracking the decisions & beliefs of other people in the community. Goodhart's Law neatly sums up the problem.

Why does this happen? Why do capable, well-intentioned people get sucked into equilibria that are deeply, obviously strange? 

Let's revisit this part of jimrandomh's great comment:

I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually at least positive about the pivot, at least from the people who matter. But getting people who've internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.

 

I don't have a full articulation yet, but I think this starts to get at it. The strange equilibria fulfill a real emotional need for the people who are attracted to them (see Core Transformation for discussion of one approach towards developing an alternative basis for meeting this need). 

And from within an equilibrium like this, pointing out the dynamics by which it maintains homeostasis is often perceived as an attack...

Comments

Just a quick comment that I don't think the above is a good characterisation of how 80k assesses its impact. Describing our whole impact evaluation would take a while, but some key elements are:

  • We think impact is heavy-tailed, so we try to identify the most high-impact 'top plan changes'. We do case studies of what impact they had and how we helped. This often involves interviewing the person, and also people who can assess their work. (Last year these interviews were done by a third party to reduce desirability bias.) We then do a rough Fermi estimate of the impact.

  • We also track the number of a wider class of 'criteria-based plan changes', but then take a random sample and make Fermi estimates of impact so we can compare their value to the top plan changes.

If we had to choose a single metric, it would be something closer to impact-adjusted years of extra labour added to top causes, rather than the sheer number of plan changes.

We also look at other indicators like:

  • There have been other surveys of the highest-impact people who entered EA in recent years, evaluating what fraction came via 80k, which lets us estimate the percentage of the EA workforce that came from 80k.

  • We look at the EA survey results, which let us track things like how many people are working at EA orgs and entered via 80k.

We use number of calls as a lead metric, not an impact metric. Technically it's the number of calls with people who made an application above a quality bar, rather than the raw number. We've checked and it seems to be a proxy for the number of impact-adjusted plan changes that result from advising.

This is not to deny that assessing our impact is extremely difficult, and ultimately involves a lot of judgement calls – we were explicit about that in the last review – but we've put a lot more work into it than the above implies: probably around 5-10% of team time in recent years.

I think similar comments could be made about several of the other examples, e.g. GWWC also tracks dollars donated each year to effective charities (now via EA Funds) and total dollars pledged. They track the number of pledges as well, since that's a better proxy for the community building benefits.

As @Benjamin_Todd mentioned, GWWC also does report on pledged donations and donations made.

However, Giving What We Can’s core metric on a day-to-day basis is the number of active members who are keeping their pledge. This is in part because the organisation’s aim is “to create a culture where people are inspired to give more, and give more effectively” (a community building project), and we see pledges as a more correlated and stable reflection of that than the noisy donation data (which a single billionaire can massively skew in any given year). This aim is in service of the organisation’s mission to “Inspire donations to the world’s most effective organisations”. We believe this mission is important in making progress on the world's most pressing problems (whatever they might be throughout the lifetime of the members).

Because GWWC is cause-diverse and is not the authority on impact evaluations of the charities its members donate to, it is hard to translate this into exact impact numbers across our membership. We do, however, regularly look at where our members are donating, and in our impact analysis we try to benchmark this against equivalent money donated to top charities. We do plan to improve our reporting of impact where it is possible (e.g. donations to GiveWell’s top charities) – however, this will never be a complete picture.

Note: I did not choose the mission of GWWC and do not speak on behalf of the board or the founders. However, this is my best understanding of the mission and the core metrics as required for my day-to-day operations. It is also a mission I believe to be impactful not just through direct donations moved but through indirect factors (such as moral circle expansion, movement building, and changing the incentives for charities to be more impact-focused because they see more donors seeking impact).

Indeed. I can speak to Founders Pledge which is another of the orgs listed here:

“Founders Pledge focusing on the amount of money pledged and the amount of money donated, rather than on the impact those donations have had out in the world.”

While these are the metrics we are reporting most prominently, we do of course evaluate the impact these grants are having. 
 

Thanks – does Founders Pledge publish these impact evaluations? Could you point me to an index of them, if so?

Thanks... I don't see impact evaluations of past FP money moved discussed on that page. 

Are you pointing to the link out to Lewis' animal welfare newsletter? That seems like the closest thing to an evaluation of past impact.

Impact = money moved * average charity effectiveness. FP tracks money moved to their recommended charities, and this is their published research on the effectiveness of those charities and why they recommended them.

Forward-looking estimation of a charity's effectiveness is different from retrospective analysis of that charity's track record / use of FP money moved.

I agree - but my impression is that they consider track record when making the forward-looking estimates, and they also update their recommendations over time, in part drawing on track record. I think "doesn't consider track record" is a straw man, though there could be an interesting argument about whether more weight should be put on track record as opposed to other factors (e.g. intervention selection, cause selection, team quality).

I feel like I'm asking about something pretty simple. Here's a sketch:

  • FP recommends Charity Z
  • In the first year after recommending Charity Z, FP attributes $5m in donations to Charity Z because of their recommendation
  • The next time FP follows up with Charity Z, they ask "What did you guys use that $5m for?"
  • Charity Z tells them what they used the $5m for
  • FP thinks about this use of funds, forms an opinion about its effectiveness, and writes about this opinion in their next update of Charity Z

GiveWell basically does this for its top charities.
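
Here's a minimal sketch of that loop in code. Everything in it is hypothetical – the class, the dollar figure, and the effectiveness numbers are made up for illustration, and none of it is drawn from FP's actual process:

```python
# Hypothetical sketch of the retrospective loop sketched above: attribute money moved
# to a recommended charity, then later fold in a retrospective estimate of how
# effectively that money was actually used. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Recommendation:
    charity: str
    money_moved_usd: float                        # donations attributed to the recommendation
    forward_effectiveness: float                  # predicted impact per dollar (hypothetical units)
    realized_effectiveness: float | None = None   # filled in after following up on actual use of funds

    def estimated_impact(self) -> float:
        """Forward-looking estimate: money moved x predicted effectiveness."""
        return self.money_moved_usd * self.forward_effectiveness

    def realized_impact(self) -> float | None:
        """Retrospective estimate, only available once the follow-up is done."""
        if self.realized_effectiveness is None:
            return None
        return self.money_moved_usd * self.realized_effectiveness

# Year 1: recommend Charity Z and attribute $5m in donations to the recommendation.
charity_z = Recommendation("Charity Z", money_moved_usd=5_000_000, forward_effectiveness=0.0002)
print(charity_z.estimated_impact())   # forward-looking estimate only

# Year 2 follow-up: ask Charity Z what the $5m funded, form a view on how effective
# that use of funds was, and update the write-up with a retrospective figure.
charity_z.realized_effectiveness = 0.00015
print(charity_z.realized_impact())    # the retrospective estimate is what closes the feedback loop
```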

I asked someone from our impact analytics team to reply here re FP, as he will be better calibrated to share what is public and what is not.

But in principle what Ben describes is correct: we have assessments of charities from our published reports (incl. judgments of partners, such as GiveWell) and we relate those to money moved. We also regularly update our assessments of charities; charities get comprehensively re-evaluated every two years or so, with many adjustments in between when things change (funding gaps, political circumstances).

So this critique seems to incorrectly equate headline-figure reporting with all of the metrics we and others are optimizing for.
 

I think there's likely a difference here between:

What easily countable short-term goals and metrics are communicated to supporters? (bednet distributions, advising calls, etc.)

and

What things do we actually care about and track internally on longer timescales, to feed into things like annual reviews and forward planning?

 

I'd be extremely surprised if 80k didn't care about the impact of their advisees, or AMF didn't care about reducing malaria.

Perhaps it's helpful to disambiguate what a person cares about from that person's decision-making processes (and from the incentive gradients which compose the environment wherein those processes are trained).

Yeah, to be clear, I meant that the decision-making processes are probably informed by these things even if the metrics presented to donors are not, and from the looks of Ben's comment above this is indeed the case.

We make the impact evaluation I note above available to donors (and our donors also do their own version of it). We also publish top-line results publicly in our annual reviews (e.g. number of impact-adjusted plan changes), but don't publish the case studies since they involve a ton of sensitive personal information.

Why do you think the decision-making processes are informed by things that aren't presented to donors / supporters?

Because the orgs in question have literally said so, because I think the people working there genuinely care about their impact and are competent enough to have heard of Goodhart's Law, and because in several cases there have been major strategy changes which cannot be explained by a model of "everyone working there has a massive blind spot and is focused on easy-to-meet targets". As one concrete example, 80k's focus has switched to be very explicitly longtermist, which it was not originally. They've also published several articles about areas of their thinking which were wrong or could have been improved, which again I would not expect from an organisation merely focused on gaming its own metrics.

Good framing! This problem extends not just to organizations and such, but also to people's individual intellectual processes (and really all areas of life). For example, people naturally avoid "consider the opposite"-type tools in their thinking. And even when it would be very revealing, people avoid thinking from other people's perspectives.

I also think it's easy to be too negative about this kind of avoidance. At a fundamental level it's there for a good reason (too much feedback is overwhelming), and it's important to be able to be OK with the fact that you are avoiding some kinds of feedback, so that you can grow what you do accept.

But obviously pointing out feedback avoidance is good.

Kit

(I downvoted this because a large fraction of the basic facts about what organisations are doing appear to be incorrect. See other comments. Mostly I think it's unfortunate to have incorrect things stated as fact in posts, but going on to draw conclusions from incorrect facts also seems unhelpful.)

Could you give some examples of the basic facts I stated that appear incorrect?

People from 80k, Founders Pledge and GWWC have already replied with corrections.

Those weren't corrections... 

The statements I make in the original post are largely about what an org is focusing on, not what it is formally tracking.

I also downvoted for the same reason. I've looked at 80k's reports pretty closely (because I was basing our local EA group's metrics on them) and it seemed pretty obvious to me that the counterfactual impact their advisees have is in fact the main thing they try to track and that they use for decision-making.

I haven't looked into the other orgs as deeply, but your statement about 80k makes me disinclined to believe the rest of the list.

Where do you get the impression that they focus mainly on # of calls?

"Where do you get the impression that they focus mainly on # of calls?"

I don't have this impression. From the original post:

80,000 Hours tracking the number of advising calls they make and the number of career plan changes they catalyze, rather than the long-run impacts their advisees are having in the world.


It would be interesting to see a cohort analysis of 80k advisees by year, looking at what each advisee from each cohort has accomplished out in the world in the following years.

Maybe that already exists? I haven't seen it, if so.

I don't have this impression.

In the sentence you quoted, you literally state that 80k tracks the # of calls and # of career plan changes, but doesn't track the long-run impacts of their advisees.

Saying "80k tracks the # of calls and # of career plan changes, but doesn't track the long-run impacts of their advisees" is different from saying "80k focus[es] mainly on # of calls"

The problem of how to create feedback loops is much more difficult for organisations that focus on far-future (or even medium-future) outcomes. It's still worth trying to create some loops, as far as possible, and to tighten them.
