ESRogs

197 · Joined Sep 2014

Comments

I had a similar impression. Some related thoughts here.

Copying over some comments I made on Twitter, in response to someone suggesting that Sam now appears to be "a sociopath who never gave a toss about EA or its ideals":

He does seem pretty sociopathic, but it's still unclear to me whether he really cared about EA.

I think it's totally possible that he genuinely wanted to improve the world by funding EA causes, and is also a narcissistic liar who is unwilling to place limits on his own behavior.

As Jess Riedel pointed out to me, it looks like Bill Gates ruthlessly exploited his monopoly in the 90s, and also genuinely tried to do good with his money in the 2000s. Trying to cause good things to happen is totally compatible with also doing bad things.

I think it's important for us to keep this possibility in mind. Otherwise I think we're more likely to fail to question and put limits on our own behavior, since we're confident our intentions are good.

there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people, and I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity

Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?

(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn't? Or... what?)

The culture emphasizes analysis over practice, and it does not attract many of the leaders and builders that are critical for maximizing impact.


EA has a lot of rhetoric around openness to ideas and perspectives, but actual interaction with the EA universe can feel more like certain conclusions are encased in concrete.

It seems to me that there is some tension between these two criticisms — you want EA to focus less on analysis, but you also don't want us to be too wedded to our conclusions. So how are we supposed to change our minds about the conclusions w/o doing analysis?

My guess (based on the rest of the essay) is that you want our analysis to be more informed by practice.

But I just want to emphasize that, in my view, analysis (and specifically cause neutrality) is what makes EA unique. If you take out the analysis, then it's not clear what value EA has to offer the rest of the charity / social impact world.

And then they can read the post above to have that question clearly answered!

Any tips on the 'how' of funding EA work at such think tanks?

Reach out to individual researchers and suggest they apply for grants (from SFF, LTFF, etc.)? Reach out as a funder with a specific proposal? Something else?

opinion which ... is mainly advocated by billionaires

Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?

I don't think either claim is true (or even close to true).

The RSP idea is cool.

Dumb question — what part of the post does this refer to?
