The 2019 EA survey found that a clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. Those moral views color and shape how EA functions. So the claim that effective altruism does not depend on utilitarianism is a weak one, both historically and presently.
Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.
Over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise.
Over time EA has become increasingly big tent and has ventured into offering opinions on altruistic initiatives it would previously have criticized or deemed ineffective.
That is to say, the concern is that EA is becoming merely A, over time.
An uncharitable tone? Perhaps I should take it as a compliment. Being uncharitably critical is a good thing.
This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those aims.
When I first became an EA a decade ago and familiarized myself with the (blunt and iconoclastic) EA concepts and ideas in the EA handbooks and other relevant writings, there was no talk of diversity, of righting historic wrongs with equity, of inclusion, or of intersectionality. These were not the ...
Wonderfully written.
Although Fukuyama’s end is anything but, as there will come a point where democracy, free markets, and consumerism collapse and give way to AI-driven technocracy.
Democracy, human rights, free markets, and consumerism “won out” because they increased human productivity and standards of living relative to rival systems. That doesn’t make them a destiny, but rather a step that is temporary, like all things.
For the wealthy and for rulers or anyone with power, other humans were and are simultaneously assets and liabilities. But we ar...
At minimum, his life is as much a marvel to praise as it is a bit of a tragedy. Like a true altruist, he quite literally worked himself to death for the good of others. Even if his methodologies weren’t always the most effective, very few will ever match his degree of selfless sacrifice.
Man, I miss the days when EA wasn’t caught up in pop-culture ethics like first-world social-justice intersectionality or DEI, and focused instead on tractable problems in the developing world.
Discrimination in the US is bad and all (the GM example in the OP’s article), sure, but it truly pales in comparison to the suffering experienced by those sick with infectious diseases like malaria, or by animals on factory farms.
DEI initiatives, promoted by the likes of BLM, raised tens of millions of dollars, yet hardly any of it went to saving actual black lives. It was a failed experiment that makes t...
Not the most charitable tone, I think. And I disagree strongly with your points.
You compare DEI initiatives with interventions in global health and animal suffering - but this post doesn't argue for such a comparison. This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those values. The applications considered in the post are how intersectionality can offer new insights or perspectives on existing cause areas, and how intersectionality might improve com...
As EA grew from humble, small, and highly specific beginnings (like, but not limited to, high-impact philanthropy), it became increasingly big tent.
In becoming big tent, it has become tolerant of ideas or notions that would previously have been heavily censured or criticized in EA meetings.
Namely, this is in large part because early EA was more data-driven, with less of a focus on hypotheticals, speculation, and non-quantifiable metrics. That’s not to say current EA isn’t data-driven; it’s just relatively less stressed now than it was 5-10 years ago.
In practice, this mean...
I actually don't relate to much of what you're saying here.
I know je...
I don't think core EA is more "big tent" now than it used to be. Relatively more intellectual effort is devoted to longtermism now than to global health and development, which represents a shift in focus rather than a widening of focus.
What you might be seeing is an influx of money across the board, which at least partially lowers the funding bar for more speculative interventions.
Also, many people now believe that the ROI of movement building is incredibly high, which I think was less true even a few years ago. So net positive but not very ex...
This doesn't seem true to me, but I'm not an "old guard EA". I'd be curious to know what examples of this you have in mind.