All of Ezra Newman's Comments + Replies

I didn't write about them because, unlike 80k:

  1. They don't vet and don't claim to. Anyone can add a job.
  2. They are not (sorry for being extra subjective here; feel free to push back) a cornerstone of EA that many people know, trust, and have certain expectations of.

FWIW, I have to remind myself every time I read EV Ops that it’s not Expected Value Ops. (That said, I don’t know anything about marketing; n=1.)

Easy context: 14.) I don't think we pay enough attention to some aspects of EA that could be at cross-purposes

This is a good point, sorry for getting back to it so late.

One idea I cut from the post: I think scope insensitivity means we should be suspicious of our gut intuitions in situations dealing with lots of people, so I think that’s another point in favor of accepting the RC. My main goal with this point was to suggest this central idea: “sometimes trust your ethical framework in situations where you expect your intuition to be wrong.”

 

That being said, the rest of your point still stands.

I said this on Twitter, but: this is really great! (Also very glad it’s coming directly to my inbox!)

This is a good point, I guess.

From my reply to Kirmani’s comment (new since you asked this):

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large-scale intuition and your small-scale intuition.

Honestly, it’s a pretty specific argument/recommendation, so I’m having trouble thinking of another example that adds something. Maybe the difference between how I feel about my dog vs. farmed animals, or near vs. far people. If it would help you or someone else, I can spend some more time thinking of one.

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large-scale intuition and your small-scale intuition.

Specifically:

  • Have as much sex as you want (with a consenting adult, etc.). Have as many children as you can reasonably care for. But even if you disagree with that, I don’t think this is a good counterexample. It’s not a conflict between small-scale beliefs and large-scale beliefs.
  • This is new information, not a small-large conflict. 
  • Same as above. 
4 · Daniel Kirmani · 2y
As Wei Dai mentioned, tribes in the EEA weren't particularly fond of other tribes. Why should people's ingroup compassion scale up, but not their outgroup contempt? Your argument supports both conclusions.

In response to “Shut Up and Divide”:

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small. A tribe might have at most a few hundred people, which happens to be ~where your naive intuition stops scaling linearly.

So it seems like your default behavior should be extended to your new circumstances, instead of extending your new circumstances to your default state.
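For concreteness, here’s a toy sketch of how the two policies come apart as the circle grows (my own illustrative framing; the constants are arbitrary):

```python
# Toy sketch (illustrative constants, not a real model) contrasting
# "shut up and multiply" with "shut up and divide" as the circle grows.

TRIBE_SIZE = 300        # roughly where naive intuition stops scaling linearly
CARE_PER_PERSON = 1.0   # arbitrary units of caring, calibrated at tribe scale

def per_person_multiply(n: int) -> float:
    """Multiply: hold per-person caring fixed, so total caring scales with n."""
    return CARE_PER_PERSON

def per_person_divide(n: int) -> float:
    """Divide: hold total caring fixed at tribe scale, so per-person caring shrinks."""
    return CARE_PER_PERSON * TRIBE_SIZE / n

for n in (TRIBE_SIZE, 10_000, 8_000_000_000):
    print(f"n={n:>13,}  multiply total={per_person_multiply(n) * n:>13,.0f}  "
          f"divide per-person={per_person_divide(n):.2e}")
```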

(Although I think SUAD might be useful for not getting trapped in... (read more)

6 · Wei Dai · 2y
I don't think I understand what your argument is. Even in our EEA we had influence beyond the immediate tribe, e.g., into neighboring tribes, which we evolved to care much less about, hence inter-tribal raids, warfare, etc. I'm just not sure what you mean here. Can you explain with some other examples? (Are Daniel Kirmani's extrapolations of your argument correct?)

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.

Your argument proves too much:

  • My sex drive evolved before condoms existed. I should extend it to my new circumstances by reproducing as much as possible.
  • My subconscious bias against those who don't look like me evolved before there was a globalized economy with opportunities for positive-sum trade. Therefore, I should generalize to my new circumstances by beco... (read more)

I have 13 followers, and most of those are friends or coworkers, so I don’t feel qualified to be that someone. But I would also love to see this!

FWIW, this has worked for me too. I got hired this summer (as a college freshman) because I was impressed by and interested in some GPT-3 stuff that Peter Wildeford was doing on Twitter and wanted to try it myself. Those tweets got me hired!

 

TLDR: Tweet about interesting stuff and reply to people you think are smart!

1 · random_tips · 2y
Twitter is definitely underrated for career dev (including by me up until a few weeks ago). Perhaps someone could write a post about dynamics on the platform? I find it quite intimidating and infinite game-ish relative to emailing/DMs, and I wouldn't be surprised if others felt similarly.

Do you mean for the title to say "<= 3 mins"? I think you have your ">" inverted. (It took me about 3 minutes for the first section, and about 10 minutes all-in.)

4 · Niel_Bowerman · 2y
I was meaning to say "3 or more minutes".

With all the “AI psychology” posts on here and Twitter, I thought this was going to be “interviews with AIs that are researchers” not “interviews with humans researching AI”. This is probably more valuable!

My justification is pretty simple:

  1. I like being happy and not having malaria and eating food.

  2. I appear to be fundamentally similar to other people.

  3. Therefore, other people probably want to be happy and not have malaria and have food to eat.

  4. I don’t appear to be special, so my interests shouldn’t be prioritized more than my fair share.

  5. Therefore, I should help other people more than I help myself, because there are more of them and they need more help.