All of OldSchoolEA's Comments + Replies

As EA grew from humble, small, and highly specific beginnings (like, but not limited to, high-impact philanthropy), it became increasingly big tent.

In becoming big tent, it has become tolerant of ideas or notions that previously would have been heavily censured or criticized in EA meetings.

This is in large part because early EA was more data-driven, with less of a focus on hypotheticals, speculation, and non-quantifiable metrics. That's not to say current EA isn't data-driven; it's just that this is relatively less stressed compared to 5-10 years ago.

In practice, this mean... (read more)

I actually don't relate to much of what you're saying here. 

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research, and bam, you are now an EA. In the past, the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, achieve something greater, and follow a proven career path that is less risky and more rewarding (intellectually and fiscally) than jellyfish study?

I know je... (read more)

I don't think core EA is more "big tent" now than it used to be. Relatively more intellectual effort is devoted to longtermism now than to global health and development, but that represents a shift in focus rather than a widening of focus.

What you might be seeing is an influx of money across the board, which at least partially lowers the funding bar for more speculative interventions.

Also, many people now believe that the ROI of movement building is incredibly high, which I think was less true even a few years ago. So net positive but not very ex... (read more)

[This comment is no longer endorsed by its author]

Now it’s more like any and all causes are assumed effective or potentially effective from the get-go, and are then supported by some marginal amount of evidence.

This doesn't seem true to me, but I'm not an "old guard EA".  I'd be curious to know what examples of this you have in mind.

NunoSempere
Strongly upvoted, but this should be its own top-level post.
  1. The 2019 EA survey found that the clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. Their moral views color and influence how EA functions. So the lack of dependence of effective altruism on utilitarianism is a weak argument, historically and presently.

  2. Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.

Ben Stewart
I agree that most EAs identify with consequentialism, and that that proportion was likely higher in the past. I also lean consequentialist myself. But that's not what we disagree about. You move from 'the majority of EAs lean consequentialist' to 'the only ideas EA should consider seriously are utilitarian ones', and that is what I disagree with. Moral Uncertainty is a book about what to do given that there are multiple plausible ethical theories, written by two of EA's leading lights, Toby Ord and Will MacAskill (along with Krister Bykvist). Perhaps you could consider it.

over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise.

Over time, EA has become increasingly big tent and has ventured into offering opinions on altruistic initiatives it would previously have criticized or deemed ineffective.

That is to say, the concern is that EA is, over time, becoming merely A.

An uncharitable tone? Perhaps I should take it as a compliment. Being uncharitably critical is a good thing.

This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those aims.

When I first became an EA a decade ago and familiarized myself with (blunt and iconoclastic) EA concepts and ideas in the EA handbooks and other relevant writings, there was no talk of diversity, of righting historic wrongs through equity, of inclusion, or of intersectionality. These were not the ... (read more)

Guy Raveh
The change over time from a simplistic, first-order theory of effective altruism is warranted and natural. You describe a set of rules of thumb for utilitarianism, but the thing is, over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise. You may prefer to keep the old ways, but that doesn't make them the "correct" EA formalism.
Ben Stewart
That's... a lot to unpack. I think we probably disagree on a lot, and I'm not sure further back-and-forth will be all that productive. I trust other readers to assess whose responses were substantive or convincing. Two final comments: 1) As mentioned in McMahan's 'Philosophical Critiques of Effective Altruism', the earliest arguments by Singer and Unger were based on intuitions about a thought experiment and on consistency, and "there is no essential dependence of effective altruism on utilitarianism." 2) Even if we grant that early EA was 100% and wholeheartedly utilitarian, does it follow that EA today should be?

Wonderfully written.

Although Fukuyama’s end is anything but, as there will come a point where democracy, free markets, and consumerism collapse and give way to AI-driven technocracy.

Democracy, human rights, free markets, and consumerism “won out” because they increased human productivity and standards of living relative to rival systems. That doesn’t make them a destiny, but rather a step that is temporary, like all things.

For the wealthy and for rulers or anyone with power, other humans were and are simultaneously assets and liabilities. But we ar... (read more)

Mahdi Complex
I believe the end-goal isn't a world ruled by a benevolent global elite that owns all the robots. The goal isn't to create a 'techno-leviathan' for people to ride. The goal is to find a benevolent God in mind design space, one we would be happy to give up sovereignty to. That, I think, is what AI alignment is about. (A related discussion on LW.) Either way, I think we're going to need some serious 'first principles' work at the intersection of AI alignment and political philosophy. "What is the nature of a just political and economic order when humans are economically useless and authority lies with a superhuman AI?" "What institution would even have the legitimacy to ask this question, let alone answer it?"
James_Banks
The plausibility of this depends on exactly what the culture of the elite is. (In general, I would be interested in knowing what all the different elite cultures in the world actually are.) I can imagine there being some tendency toward thinking of the poor / "low-merit" as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking "why not? The world is big, let the undeserving live," or even things which are more humane than that.

But also, despite whatever humaneness there might be in the elite, I can see there being Molochian pressures to discard humans. Can Moloch be stopped? (This seems like it would be a very important thing to accomplish, if tractable.) If we could solve international competition (competition between the elite cultures who are in charge of things), then nations could choose not to have the most advanced economies they possibly could, and thus could have a more "pro-slack" mentality.

Maybe AGI will solve international competition? I think a relatively simple, safe alignment for an AGI would be one that made it the servant of humans -- but which ones? Each individual? Or the elites who currently represent them? If the elites, then it wouldn't automatically stop Moloch. But otherwise it might. (Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity's "revealed preference".)

At minimum, his life is as much a marvel to praise as it is a bit of a tragedy. Like a true altruist, he quite literally worked himself to death for the good of others. Even if his methodologies weren’t always the most effective, very few will be able to match his degree of selfless sacrifice.

JMonty🔸
I think his friends and Farmer himself would disagree with you: he loved what he did and felt he could not do otherwise. He was also always smiling, laughing, and joking. His memorial service is on YouTube; both of his cofounders talked about his sense of humor and his love for the work he was involved in. I think a life lived happily and in the service of improving the world is about the farthest possible thing from a tragedy, even if it is shorter than average.

Man, I miss the days when EA wasn’t caught up in pop-culture ethics like first-world SJ intersectionality or DEI, and focused instead on tractable problems in the developing world.

Discrimination in the US is bad and all (see the GM example in the OP’s article), sure, but it truly pales in comparison to the suffering experienced by those sick with infectious diseases like malaria, or by animals on factory farms.

DEI initiatives, promoted by the likes of BLM, raised tens of millions of dollars, yet hardly any of it went to save actual black lives. It was a failed experiment that makes t... (read more)

John M Bridge 🔸
I think you might have misunderstood the scope of this post. I want to emphasise that I endorse none of the following claims: If we remove these claims and just consider whether intersectionality would be a useful tool (of many different possible tools) for helping EAs think through difficult ideas, would this change your position at all?

Not the most charitable tone, I think. And I disagree strongly with your points. 

You compare DEI initiatives with interventions in global health and animal suffering, but this post doesn't argue for such a comparison. This post suggests that the EA community already values diversity, inclusion, etc., and that a greater understanding of intersectionality could help further those values. The applications considered in the post are how intersectionality can offer new insights or perspectives on existing cause areas, and how intersectionality might improve com... (read more)