OldSchoolEA

18 karma · Joined March 2022

Bio

I don’t want to lose my job because of my opinions, hence this alt account.

Comments (8)

As EA grew from humble, small, and highly specific beginnings (such as, but not limited to, high-impact philanthropy), it became increasingly big tent.

In becoming big tent, it has become tolerant of ideas or notions that previously would have been heavily censured or criticized in EA meetings.

This is in large part because early EA was more data-driven, with less of a focus on hypotheticals, speculation, and non-quantifiable metrics. That’s not to say current EA lacks these qualities; they’re just stressed relatively less than they were 5-10 years ago.

In practice, this means today’s EA is more willing to consider altruistic impact that can’t be easily or accurately measured or quantified, especially with (some) longtermist interests. I find this to be a rather damning weakness, although one could make the case that it is also a strength.

This also extends to outreach.

For example, I wouldn’t be surprised if an EA gave a dollar to, or volunteered for, a seeing-eye-dog organization [or any other ineffective organization] under the justification that this is “community-building” and that, like the Borg, we will someday be able to assimilate them, make them more effective, or recruit new people into EA.

To me and other old-guard EAs, this is wishful thinking, because it makes EA less true to its epistemic roots, especially over time as non-EA ideas enter the fold and influence the group. One example is how DEI initiatives are wholeheartedly welcomed by EA organizations, when in fact there is little evidence that the DEI/progressive way of hiring personnel and staff results in better performance outcomes than normal hiring that doesn’t give a candidate an edge based on their ethnicity, gender, race, or sexual orientation.

This extends even more to cause prioritization. In the past, it was very difficult to have your championed or preferred cause even considered remotely effective. The null hypothesis was that your cause wasn’t effective... and that most causes weren’t.

Now it’s more like any and all causes are assumed effective or potentially effective from the get-go, and are then supported by some marginal amount of evidence. A less elitist and stringent approach, but an inevitable one once you become big tent. Some people feel this made EA a friendlier place. Let’s just say that today you’d be less likely to be kicked out of an EA meeting for being naively optimistic and without a care for figures, and more likely to be kicked out for being overtly critical (or even mean), even though that criticalness was the strict attitude of early EA meetings that turned a lot of people off from EA. It turned me off too, when I first heard about it; I later came around to appreciate and welcome that sort of attitude and its relative rarity in the world. Strict and robust epistemics are underappreciated.

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially boonful future consequences of your research and, bam, you are now an EA. In the past the response would have been: why study jellyfish when you could use your talents to accomplish something greater, following a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?

Unlike Disney, with its iron-claw grip over its brands and properties, it’s much easier nowadays to call oneself an EA or identify as part of the EA sphere… because, well, anything and everything can be EA. The EA brand, once tightly controlled and small, has now grown, and it can be difficult to tell the fake Gucci bags from the real deal when both are sold at the same market.

My greatest fear is that EA will over time become just A, without the E, and lose its initial ruthlessly data- and results-driven form of moral concern.

  1. The 2019 EA survey found that a clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. Their moral views color and influence how EA functions. So the claim that effective altruism doesn’t depend on utilitarianism is a weak one, historically and presently.

  2. Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.

over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise.

Over time, EA has become increasingly big tent and has ventured into offering opinions on altruistic initiatives it would previously have criticized or deemed ineffective.

That is to say, the concern is that EA is becoming merely A, over time.

An uncharitable tone? Perhaps I should take it as a compliment. Being uncharitably critical is a good thing.

This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those aims.

When I first became an EA a decade ago and familiarized myself with (blunt and iconoclastic) EA concepts and ideas, in the EA handbooks and other relevant writings, there was no talk of diversity, righting historic wrongs with equity, inclusion, or intersectionality. These were not the values the community sought to maximize or the domains of knowledge it meant to understand. They had nothing to do with increasing utility and combating disutility. Granted, not every EA was utilitarian. But EA grew out of utilitarianism and utilitarian philosophers like Singer and MacAskill. The consequentialist focus was on maximizing good via high-impact philanthropy, on how one could do good better, relative to QALYs and DALYs. EA wasn’t very inclusive either: it was (necessarily) harsh towards any and all who rejected an evidence-based, quantifiable, doing-good-better approach, irrespective of their backgrounds.

There was extreme methodological, data-driven rigor. If you suggested there was a pressing need to follow in the footsteps of intersectionalist activists and fight racial discrimination and injustice in the US, adopting the intersectionalists’ jargon and flawed ideas, you’d be laughed at… or at least critiqued at an EA meeting. That cause, whilst noble, was far from a tractable priority. People, animals, and countless other sentient beings were out there dying and suffering. What are the 300 or so people who die at the hands of American police brutality annually compared to the 300 kids in Africa who die every hour…
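
To put those (admittedly rough, unsourced) figures of mine side by side:

$$300 \text{ deaths/hour} \times 8{,}760 \text{ hours/year} \approx 2.6 \text{ million deaths/year}, \quad \text{versus} \quad \sim 300 \text{ deaths/year},$$

a gap of nearly four orders of magnitude.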

Things like seeing-eye-dog campaigns and giving to art museums were deemed ineffective. Today we have DEI campaigns and other sorts of ineffective altruism that have crept up and infiltrated the main EA sphere. Perhaps today we should replace the “give $1 to AMF or to a seeing-eye-dog charity” thought experiment with “give $1 to AMF or to a DEI educational campaign.” One is effective, the other not so much.

DEI would be fine if there were evidence that maximizing DEI was good for EA ends, but frankly, I see no evidence that this is the case. The focus of community building in EA shifted from growing the EA community to complaining that the EA community was somehow inherently in the wrong, or discriminatory, or evil, for ending up mostly male, white, secular, tech-based, etc. That couldn’t stand, so there was a push to make EA more diverse, open, and inclusive.

Which is great, and had my initial support. But it comes with the risk that those who might not share EA values and methodologies will become EAs and, over time, shift EA’s values and priorities as these individuals become more numerous and influential and rise to leadership positions. EA became increasingly big tent in part because of this.

I initially supported this outreach, but didn’t expect the epistemic baggage and prioritized non-EA values of others to in turn infiltrate and alter EA from the inside out. Previously, I found EA had a stronger ideological unity and sense of purpose. No one cared about what gender or race you were; that wasn’t important. Only your beliefs, values, epistemologies, and deeds mattered. What mattered even more was discourse-driven consensus among EAs, but that consensus, and what we all share, has given way to inclusion and a relativistic diversity of thought. Look at the criteria for what it means to be an EA; look how vague and non-specific they have become :(

Today the EA community is one where diversity, “equity”, and “justice” have become innate, disseminated values, rather than potential or circumstantial instrumental ones in service of previously lauded ends. I’ve watched this sad and slow evolution take place, and it saddens the inner utilitarian in me.

So DEI has become a cause area within a cause area, and we are all aware of it.

Intersectionality is not just a flawed, unquantifiable epistemology. It is the very means by which DEI initiatives are maximized and implemented.

After all, if your goal is to maximize diversity, then you need intersectionality to draw up the dozens of (in my opinion irrelevant) demographic categories (racial, religious or lack thereof, ethnic, gender, sex, health status, socioeconomic, sexual, age, level of education, citizenship status, etc.), and then try to make sure you have people who match all the combinations and criteria (a combinatorial explosion; see the toy calculation below). Then you have to make sure equity is there, so all historical wrongs have to be accounted for. Then you have to shame people for making assumptions or holding beliefs about those who are part of other categories.
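
To see how quickly “all the combinations” becomes intractable, here is a toy calculation (the numbers are mine, purely illustrative): with $k$ independent categories of $n$ values each, the number of distinct intersectional profiles is

$$n^k, \qquad \text{e.g. } 3^{10} = 59{,}049,$$

far more combinations than most organizations have members.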

For example, intersectionalists claim it’s pointless for a male to study female psychology, because a male will never understand what it’s like to be female and should instead have no voice in the conversation.

Second, the special obligations that people of colour might feel more strongly may not attach to skin colour only, or at all. Racism and its effects are complex, so special obligations might be more specific to a particular culture, ethnicity, geography, or history. For example, perhaps an obligation is due to a particular way an ethnicity is treated in a specific place, such as Cuban-Americans in Florida, or on the basis of a particular historic relationship, such as the slave trade.

These obligations, if they exist, are not EA. Period. They are not effective. Yes, they may be forms of altruism, but they are ineffective ones, based on kinship, green-beard effects, localism, etc. They aren’t EA. They aren’t neutral. We as a community used to take a harsher stance against these, because the money goes further overseas. Has that been lost?

I’m White and Asian, and I’ve experienced discrimination and dislike from humans who adopt tribalistic mentalities. I’m no stranger to racism, but I realize that culture and history can turn people into the opposite of what EAs strive for: cause neutrality.

It's simplistic to assume that special obligations felt more strongly by people of colour must be based on skin colour, and so alleviated by helping other people of a similar skin colour.

Having spoken to plenty of PoC intersectional activists, I can say there is often an emphasis on color, and I find it delusional to deny it.

One such campaign (for example) is “buy from this business, it is Black-owned”, or “support this charity because it is run entirely by PoC and is fully diverse”. These campaigns argue there is a moral obligation to support charities or businesses based on the demographic characteristics of their owners or leaders. I find this unjustifiable relative to other charities or initiatives.

While you are not wrong in pointing out that (today) one doesn’t have to be a utilitarian to be an EA, back in the day it was rare to find an EA who wasn’t a utilitarian or an adherent of the utilitarian moral prescriptions of Singer and the like.

Ibid. for your last point, which seems to claim that unless something is quantifiable it is epistemically suspect. I think there's a big range of ideas worth considering when thinking about how to do the most good in the world. Not all of those ideas are easily quantifiable.

I agree, but EA’s strength is its focus on what is quantifiable.

EAs are humans, not utility-maximising machines. And human psychology is complex. You can't capture who someone is by asking them to write down all their beliefs, values, and/or actions. Because we can't write them all down or test them or even know about them all, it's worth being interested in gaining perspectives from lots of different people, who have lived different kinds of lives.

Humans are utility-maximizing machines, though we are often very bad at it. You can get a good and workable approximation of someone based on their values, beliefs, and actions.

it's worth being interested in gaining perspectives from lots of different people, who have lived different kinds of lives.

Gonna have to disagree there. The perspectives of those training seeing-eye dogs, caring about art, or volunteering at the local theater are not worth considering. What I liked about EA was that some perspectives are more important than others, and we can hone the perspectives that matter over those that are not morally or epistemically relevant.

Similarly, if our community is particularly homogenous with respect to gender, ethnicity, culture, or class, it would be worth trying to get more involvement and ideas from people from underrepresented gender/ethnicity/culture/class.

This seems to assume that people from underrepresented genders, ethnicities, cultures, or classes are incapable of generating the same ideas, and that they somehow hold ideas that differ from those of the homogeneous majority.

Or at minimum, if these ideas are in fact different, it assumes those ideas are better than what the majority has come up with (which I find unlikely, given the rarity of EA methodological rigor).

Frankly (for example), I can’t tell the difference between a female/white/American/working-class hedonistic utilitarian and a male/Black/French/middle-class hedonistic utilitarian.

As far as I’m concerned, both are hedonistic utilitarians with the same (or highly similar) hedonistic utilitarian ideas. Their sex or gender or race doesn’t change that.

Wonderfully written.

Although Fukuyama’s end is anything but, as there will come a point where democracy, free markets, and consumerism will collapse and sunder into AI-driven technocracy.

Democracy, human rights, free markets, and consumerism “won out” because they increased human productivity and standards of living relative to rival systems. That doesn’t make them a destiny, but rather a step that is temporary, like all things.

For the wealthy, and for rulers or anyone with power, other humans were and are simultaneously assets and liabilities. But we are gradually entering an age where other humans will cease to be assets yet will remain liabilities. After all, you don’t need to provide health insurance or pay the healthcare costs of a robot. If humans are economically needed, then the best system for them is a free-market democracy.

But what happens to the free market democracy when humans are no longer needed?

We will eventually arrive at an ugly new era, fully automated, where humanity becomes increasingly redundant, worthless, and obsolete. The utility and power (economic, military, and civil) an average person possesses will become close to naught. No one will “need” you, the human, and if you aren’t part of the affluent, you’ll be lucky if others altruistically wish to keep you alive…

We still hold out hope that the global elites will care about human rights, lives, democracy, and consumerism in the coming age, where we will be powerless compared to those who own the robots and all the humanless means of production. But perhaps it’s the inner cynic in me that says this is highly unlikely.

Yet as altruistic folks, we strive to make sure the system that replaces the current one will be benevolent to most, if not all.

At minimum, his life is as much a marvel to praise as it is a bit of a tragedy. Like a true altruist, he quite literally worked himself to death for the good of others. Even if his methodologies weren’t always the most effective, very few will be able to match his degree of selfless sacrifice.

Man, I miss the days when EA wasn’t caught up in pop-culture ethics like first-world social-justice intersectionality or DEI, and focused instead on tractable problems in the developing world.

Discrimination in the US is bad and all (the GM example in the OP’s article), sure, but it truly pales in comparison to the suffering experienced by those sick with infectious diseases like malaria, or by animals on factory farms.

DEI initiatives, promoted by the likes of BLM, raised tens of millions, yet hardly any of it went to saving actual Black lives. It was a failed experiment that makes the disastrous PlayPump look like a success in comparison…

I also really don’t understand the criticism, supposedly voiced by EAs of colour, mentioned in the article:

For example, more than a few EA people of colour I’ve spoken to have expressed discomfort about only donating to maximally effective charities, and this relates directly to their intersectional identity. Being both a part of the wealthy global elite and people of colour, they feel a special obligation to help people within their own communities who are not blessed with the same advantages.

Last time I checked, deworming and anti-malarial initiatives were NOT taking place in, or benefitting, the (largely white) first world. Most maximally effective charities help communities of color internationally that are in far worse shape than communities in the developed world. So this criticism seems like hogwash. If one cares about PoC and wishes to save the most lives, then it should not matter where those PoC communities are. We know we can save the most lives (lives which happen, by sheer chance, to be PoC) through deworming and anti-malaria initiatives.

Moving on. Relative to other cause areas, DEI and intersectionality seem rather wasteful or inefficient, and they’re definitely not neglected.

I have yet to see sufficient evidence that DEI or intersectionality is utilitarian in the slightest. If it is consequentialist, then it certainly isn’t trying to minimize disutility or maximize utility, opting instead for the paperclips of “diversity”. I don’t understand how diversity is an inherent good when there is insufficient evidence that it is even instrumental to other pursued goods.

Unless DEI initiatives are sufficiently utilitarian, the goal shouldn’t be for EA to become diverse and equitable, but rather to attract the most qualified candidates, irrespective of their [intersectional] backgrounds.

It is more effective to get more people (irrespective of their backgrounds) to become EAs than to focus on fulfilling racial or ethnic quotas for signaling purposes.

We need more “X” EAs, where X can be any marginalized category. Umm… sure, but the statement still holds if you take the X out. We need more EAs, and we shouldn’t care whether many of them turn out to be WEIRD or not. I don’t know why that is even relevant. Who gives a () about what an EA looks like or what their background is? Only their beliefs, actions, and values matter for utility maximization. Not the color of their skin, gender, or sexuality.

That is to say, the disadvantages faced by an individual[1] cannot be understood simply by totting up the separate reasons they might be disadvantaged.

And this is what turns me off about intersectionality, epistemically speaking. There is no attempt to mathematically quantify or statistically measure how much disutility someone suffers from being X or Y, with confounding variables in mind. Its epistemology is grounded in a lot of “woo” and unknowables. I’m told by intersectionalists that, as a WEIRD person, I’ll never be able to understand or model the discrimination or pain faced by (for example) a queer, poor, Black, Muslim individual. Well, what’s the point in engaging with that which is incalculable, then?
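
For what it’s worth, here’s a minimal sketch of the kind of quantification I’d actually want to see: estimate the disutility associated with a demographic category while controlling for confounders, and report it with a confidence interval. Everything below is hypothetical (the data is simulated; “wellbeing”, “group”, “income”, and “age” are made-up names), so treat it as an illustration of the method, not a result:

```python
# Hypothetical sketch: quantifying a group-level disutility with
# confounders controlled. All data is simulated; variable names are
# made up for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),   # demographic category
    "income": rng.normal(50, 15, size=n),      # confounder
    "age": rng.integers(18, 80, size=n),       # confounder
})

# Simulated outcome: wellbeing depends on the confounders plus a small
# group effect (-0.3) that the regression should recover.
df["wellbeing"] = (
    0.05 * df["income"]
    - 0.01 * df["age"]
    - 0.3 * (df["group"] == "B")
    + rng.normal(0, 1, size=n)
)

# OLS with confounders held fixed; the coefficient on C(group)[T.B]
# estimates the group-level disutility, with a confidence interval.
model = smf.ols("wellbeing ~ C(group) + income + age", data=df).fit()
print(model.summary())
```

If claims about intersectional disutility came with even this much quantification, they would be far easier to engage with.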