November 2022 update: I wrote this post during a difficult period in my life. I still agree with the basic point I was gesturing towards, but regret some of the presentation decisions I made. I may make another attempt in the future. 



"Oh, Master, make me chaste and celibate... but not yet!"

– Augustine of Hippo, Confessions


"Surely you will quote this proverb to me: 'Physician, heal yourself!' And you will tell me, 'Do here in your hometown what we have heard that you did in Capernaum.'"

– Luke 4:23


Consider reading these posts before continuing (obviously this isn't required reading in any sense, though I will be much more likely to engage seriously with feedback from people who seem to have read and reflected carefully on the following). In rough order of importance:

The purpose of this essay is to clarify for YOU, the mind reading it, your relationship to the intellectual communities of Effective Altruism and (Bay Area) Rationality, the characteristic blind spots that arise from participating in these communities, and what might be done about all of this.

I have an intuition that this will be a lengthy post [edit: actually this is probably going to become a sequence of posts, so stay tuned!], and I suspect that you probably feel like you don't have time to read it carefully, let alone all the prerequisites I linked to.

That's interesting, if true... why do you feel like you don't have time to carefully read this post? What's going on there?

I can't control what you do, and I don't want to (I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place). I do have a humble request to make of you, though. As you continue on, perhaps feeling as though you really don't have time for this but might as well give it a quick skim to see what Milan has been up to, you may notice a desire to respond to something I say. If so, please don't respond until you actually have space to read this post and all the other posts I linked to above carefully (with care, with space...). If you don't foresee being able to do this in the immediate future, that's fine... I just ask that you not respond to this post in any way until you do.

Thank you for considering this request... I really appreciate it.

Okay, the preliminaries have been taken care of. Very good. What's this about, anyway?

I'm going to ramble on for a while, perhaps in follow-on posts, but this is the kernel of it:

I claim that the Effective Altruism and Bay Area Rationality communities, and especially their capital-allocating institutions (for EA: GiveWell, the Open Philanthropy Project and its affiliates, this Forum, the Centre for Effective Altruism, the Berkeley Existential Risk Initiative, and the Survival and Flourishing Fund; for Bay Area Rationality: LessWrong, CFAR, the Open Philanthropy Project, the Berkeley Existential Risk Initiative, and the Survival and Flourishing Fund), have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.

This is a deeply rooted mistake, and I have personally participated in making and propagating this mistaken view, largely through my work at GiveWell and the Open Philanthropy Project, and to some extent on this Forum and LessWrong as well. I have done a teensy bit of the same thing on my personal blog too. I am sorry for all of this. I avow my participation in all of it, and I feel shame when I think back on how I acted. I am really, really sorry.

Happily, I am now mostly active on Twitter, where I no longer participate in this mistaken view. You can communicate with me there with my thorough and complete guarantee that I will only communicate with you as a friend, that I have no ulterior motives when communicating with you there, that I will not infect you with any info hazards or mind viruses (intentionally or unintentionally), and that I will not take anything from you (or try to, consciously or unconsciously) that is not freely given. That is my promise and commitment to you.

(Though we probably won't have a very fruitful conversation on Twitter or anywhere else until you read all of the above, including the prerequisite posts, in a spacious, careful, open-minded way!)

Comments

Just want to briefly join in with the chorus here: I'm tentatively sympathetic to the claim, but I think requiring people to spend several hours reading and meditating on a bunch of other content – without explaining why, or how each piece ties into your core claim – and then refusing to engage with anyone who hasn't done so, is very bad practice. I might even call it laziness. At the very least, it's wildly unrealistic: you are effectively filtering for people who are already familiar with all the content you linked to, which seems like a bad way to convince people of things.

Having skimmed the links, I find it very non-obvious how many of them tie directly into your claim about the EA community's relationship with feedback loops. Plausibly, if I read and meditated on each of them carefully, I would spot the transcendent theme linking them all together – but that is very costly, and I am a busy person with no particular ex ante reason to believe it would be a good use of scarce time.

If you want to convince us of something, paying the costs in time and thought and effort to connect those dots is your job, not ours.

I agree that it's sorta lazy, but I strongly disagree that it is bad practice.

Well, of course you don't think it's bad practice, or you wouldn't have done it.

The interesting question is why, and who's right.

At the very least, I claim there's decent evidence that it's ineffective practice, in this venue: your post and comments here have been downvoted six ways from Sunday, which seems like a worse way to advocate for your claim than a different approach that got upvoted.

At the very least, I claim there's decent evidence that it's ineffective practice, in this venue: your post and comments here have been downvoted six ways from Sunday, which seems like a worse way to advocate for your claim than a different approach that got upvoted.

We are using very different metrics to track effectiveness.

I claim that the Effective Altruism and Bay Area Rationality communities have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.

I am somewhat sympathetic to this complaint. However, I also think that many of the posts you linked are themselves phrased in terms of very high-level abstractions which aren't closely coupled to reality, and in some ways exacerbate the sort of epistemic problems they discuss. So I'd rather like to see a more careful version of these critiques.

I feel very similarly FWIW.

The title of this post did not inform me about the claim "that EAs have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact -- [and] this is a deeply rooted mistake."

I came very close to not reading what is an interesting claim I'd like to see explored, because it appears near the end of the post and there is no hint of it in the title or the opening. Since it is still relatively early in the life of this post, you may want to consider revising the title and layout to communicate more effectively.

The core thesis here seems to be:

I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. 

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we're doing pretty well.)
  3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren't important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
  4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.

(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually positive about the pivot, at least from the people who matter. But getting people who've internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.

(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them during the same sitting in which I wrote this comment.)

Thank you for this thoughtful reply! I appreciate it, and the disambiguation is helpful. (I would personally like to do as much thinking-in-public about this stuff as seems feasible.)

I mean a combination of (1) and (4). 

I used to not believe that (4) was a thing, but then I started to notice (usually unconscious) patterns of (4) behavior arising in me. As I investigated further, I kept noticing more and more of it, so now I think it's really a thing (because I don't believe I'm an outlier in this regard).


(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I agree with this. I think EA and Bay Area Rationality still have a plausible shot at shifting out of this equilibrium, whereas I think most communities don't (not self-reflective enough, too tribal, too angry, etc.).


I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better.

Yes, this is a good statement of one of the equilibria that it would be profoundly good to shift out of. Core transformation is one operationalization of how to go about this.
