All of MikeJ's Comments + Replies

Yes, I’m always unsure of what “bad faith” really means. I often see it cited as a main reason to engage or not engage with an argument. But I don’t know why it should matter to me what a writer or journalist intends deep down. I would hope that “good faith” doesn’t just mean already being aligned on overall goals.

To be more specific, I keep seeing references to hidden context behind Phil Torres’s pieces. To someone who doesn’t have the time to read through many cryptic old threads, this just makes me skeptical that the bad-faith criticism is useful in deciding whether or not to discount an argument.

2
Chris Leong
2y
Have you ever had conversations where someone has misrepresented everything you've said or where they kept implying that you were a bad person every time you disagreed with them?

Maintaining a healthy level of debate, disagreement, and skepticism is critical, but it gets harder to do as an idea becomes more popular. I believe most of the early "converts" to AI Safety carefully weighed the arguments and made a decision based on analysis of the evidence. But as AI Safety becomes a larger portion of EA, the idea will begin to spread for other, more "religious" reasons (e.g., social conformity, money, institutionalized recruiting/evangelization, leadership authority).

As an example, I'd put the belief in prediction markets as an E... (read more)

4
Linch
2y
I'm pretty confused here. On the one hand, I think it's probably good to have less epistemic deference and more independent thinking in EA. On the other, if I take your statements literally and extend them, I think they're drawing the boundaries of "religious" way too broadly, in mostly unhelpful ways.

I think people who study forecasting are usually aware of the potential limitations of prediction markets (see e.g. here, here, and here). And to the extent they aren't, this is because "more research is needed," not because of an unhealthy deference to authority. People who don't study forecasting may well overestimate the value of prediction markets, and some of this might be due to deference. But I don't know, this just seems unavoidable as part of a healthy collective epistemic process, and categorizing it as "tends towards the religious" stretches the definition of "religious" way too far.

Analogously, many non-EAs also believe that a) handwashing stops COVID-19, and b) the Earth orbits the Sun. In both cases, the epistemic process probably looks much more like some combination of "people I respect believe this," "this seems to make sense," and "the authorities believe this" than a deep, principled understanding of the science. And this just seems broadly not-religious to me. Of course, the main salient difference between a) and b) is that one of them is probably false. But I don't think it'd be appropriate to frame "holding a mistaken belief because the apparent scientific consensus is incorrect" as "religious."
7
Richard Ren
2y
This point has helped me understand the original post better. I feel that too often, EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and built-in assumptions. Thinking through and questioning those perspectives is a good exercise in general, and also extremely helpful for strengthening the motivating worldview of the community. I still don't believe this necessarily means EAs "tend toward the religious"; there are probably several layers of nuance missing from that statement. All in all, I'd love to see more people critique EA frameworks and conventional EA ideas on this forum, as I believe there are plenty of flaws to be found.

Are there any amateur EA historians who can help explain how longtermism grew in importance/status? I’d say 80k, for instance, is now much more likely to encourage folks to start a longtermist org than a global health org. There is still lots of funding moving toward traditional neartermist causes like malaria and deworming, but not much funding encouraging people to innovate there (or start another AMF).

Ultimately, I’m curious which people or orgs were convinced of longtermism first! It feels much more like top-down propagation than a natural evolution of an EA idea.

5
[anonymous]
2y
Holden Karnofsky wrote in 2016 about how his personal thinking evolved on topics that heavily overlap with longtermism, and how that was a major factor in Open Phil deciding to work on them:

This post asked for data, and I’m looking forward to that, but here is another anecdotal perspective.

I was introduced to EA several years ago via The Life You Can Save. I learned a lot about effective, evidence-based giving and “GiveWell-approved” global health orgs. I felt that EA shared the same values as the traditional “do good” community, just even more obsessed with evidence-based, rigorous measurement. I changed my donation strategy accordingly and didn’t pay much more attention to the EA community for a few years.

But in 2020, I checked back in t... (read more)

4
N N
2y
The original EA materials (at least the ones I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity: making donations with limited or equivocal evidence but large upside, in the expectation that you will eventually hit the jackpot. I think the failure to separate and explain the difference between these two approaches leads to a lot of understandable confusion and anger.
2
Nathan Young
2y
I find it a bit hard to know what to do with this. It seems fair to me that the overall tone has changed. Do you think the tone within global poverty has? I.e., if someone wants to do RCTs, can't they still?