Yes, and how many people we project will have this association in the future. I think it's reasonably likely that this view will pick up steam among vaguely activisty people on college campuses in the next five years. That's an important demographic for growing EA.
Great piece, I thought. I think Carrick Flynn's loss may in no small part be due to accidentally cultivating a white crypto-bro aesthetic. If that's right, it is a case of aesthetics mattering a fair amount. Personally, I'd like to see EA do more to avoid donning this aesthetic, which anecdotally seems to turn a lot of people off.
I'd be a little bit concerned by this. I think there's a growing sentiment among young people (especially on university campuses) that classicism is aesthetically regressive, retrograde, old-white-man stuff. Here's a quote from a recent New York Times piece:
"Long revered as the foundation of “Western civilization,” [classics] was trying to shed its self-imposed reputation as an elitist subject overwhelmingly taught and studied by white men. Recently the effort had gained a new sense of urgency: Classics had been embraced by the far right, whose members held up the ancient Greeks and Romans as the originators of so-called white culture. Marchers in Charlottesville, Va., carried flags bearing a symbol of the Roman state; online reactionaries adopted classical pseudonyms; the white-supremacist website Stormfront displayed an image of the Parthenon alongside the tagline “Every month is white history month.”"
Edit: this is a criticism of classicism as a useful aesthetic, not of the enlightenment. Potentially they're severable.
I'm curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs.
Not just because a weak general perception of EA makes it harder to attract highly engaged, competent EAs, but also because positive general perception matters even if it never results in conversion: it increases our ability to cooperate with and influence non-EA individuals and institutions.
Suppose an aggressive community building tactic attracts one HEA of average competence. In addition, it gives some number of people n a slightly negative view of EA -- not a strongly felt opposition, just enough of a dislike that they sometimes mention it in conversations with other non-EAs. What n would we accept for this community building tactic to be expected-value neutral? (This piece seems to suggest that many current strategies fit this model.)
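The break-even condition in this thought experiment can be sketched numerically. All the numbers below are illustrative assumptions I've made up for the sketch, not estimates:

```python
# Hypothetical break-even sketch for the thought experiment above.
# Both values are illustrative assumptions, not estimates.

value_of_one_hea = 1.0           # value of one averagely competent HEA (normalised to 1)
cost_per_mild_detractor = 0.002  # assumed harm per person who mildly dislikes EA

# The tactic is expected-value neutral when n * cost equals the value gained:
break_even_n = value_of_one_hea / cost_per_mild_detractor
print(break_even_n)  # 500.0 under these assumptions
```

The interesting empirical question is the ratio: if one mild detractor costs even a five-hundredth of what one HEA is worth, the tactic in the thought experiment breaks even at n = 500.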
I'm currently evaluating the feasibility and expected value of building a proxy voting advisory firm that would make EA-aligned voting recommendations. Would love to meet with you or anyone with expertise.
I think the virtues of moral expansiveness and altruistic sympathy for moral patients are really important for EAs to develop, and I think being vegan increased my stock of these virtues by reversing the "moral dulling" effect you postulate. (This paper makes the case for utilitarians to develop a set of similar virtues: https://psyarxiv.com/w52zm.) I've also developed a visceral disgust response to meat as a result of being vegan, which is for me probably inseparable from the motivating feeling of sympathy for animals as moral patients.
When I was a nonvegan, I underestimated the extent to which eating meat was morally dulling to me, and I suspect this is common. It was hard to know how morally dulled I was until I experienced otherwise.
If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.
One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.
EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstrated intellectual productivity. But our selections are not necessarily the people who have made the greatest personal altruistic sacrifices. Often, they're researchers who live in relative luxury -- even if they've taken a GWWC pledge. Perhaps we should be more deliberate about elevating the EA profile of people like those in MacFarquhar's Strangers Drowning: people who have made exceptional sacrifices to make the world better, rather than people who have been most successful at producing EA-relevant intellectual output. Maybe the keynote speaker at the next EA conference should be someone who once undertook an effective hunger strike, say. (Maybe even regardless of whether they have heard of EA, or consider themselves EA.)
There's an obvious reason to instead continue EA's current role model selection strategy: having a talk from a really clever researcher is helpful for internal community epistemics. We want to grant speaking platforms to those who might be able to offer the most valuable information or best thought-through view. And it's valuable for the external reputation of our community epistemics to have such people be the face of EA. We also don't want to promote the idea that the size of one's sacrifice is what ultimately matters. But there are internal and external reasons to choose a role model based on the degree of inspiring altruistic sacrifice that person has made, too. Just as Will MacAskill can make me a little more informed, or guide my thinking in a slightly better direction, an inspiring story of personal sacrifice can make me a little more dedicated, a little more willing to work hard and sacrifice to make the world better. And externally, such a role model signals community focus on altruistic commitment.
My low-confidence guess is that the optimum allocation of prestige still gives most EA attention and admiration to those with greatest demonstrated intellectual or epistemic power -- but not all. Those who've demonstrated acts of moral sacrifice should be held up as exemplars too, especially in external-facing contexts.
Proportional Chances Voting is basically equivalent to a mechanism where one vote is selected at random to be the deciding vote, as Newberry and Ord register in a footnote (they refer to it as "Random Dictator"; I've also seen it described as "lottery voting"). Newberry and Ord do say that Proportional Chances is supposed to be different because of the negotiation period, but I don't see how Random Dictator is incompatible with negotiation.
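The equivalence is easy to see once the mechanism is written down. Here's a minimal sketch of the random-ballot reading (the function name and example numbers are mine, not from the paper): one submitted ballot is drawn uniformly at random and decides the outcome, so each option wins with probability equal to its vote share:

```python
import random

def random_ballot_winner(ballots, rng=random):
    """Decide the outcome by drawing one ballot uniformly at random.

    Equivalent to giving each option a winning chance proportional to its
    share of ballots -- "Random Dictator" / lottery voting / (as I read it)
    Proportional Chances Voting without the negotiation stage.
    """
    return rng.choice(ballots)

# Example: with 60 ballots for A and 40 for B,
# A wins with probability 0.6 and B with probability 0.4.
ballots = ["A"] * 60 + ["B"] * 40
print(random_ballot_winner(ballots))  # prints "A" or "B" at random
```

Nothing in this mechanism precludes a negotiation period before ballots are submitted, which is why I don't see how Random Dictator is incompatible with negotiation.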
Anyway, some of the literature on this mechanism may be of interest here, given footnotes 8-9. This paper proposes such a mechanism and defends its plausibility: Saunders, Ben. "Democracy, Political Equality, and Majority Rule." Ethics 121, no. 1 (2010): 148–77. I haven't read any good papers offering interesting critiques of Saunders, but the paper seems to be influential, so maybe someone else knows of one?
As for calling new votes (footnote 9 of this post), votes could be scheduled by a body separate from the one doing the voting, or by some regularised rule. For instance, in Kira's Dinner, the thought experiment in the Newberry and Ord paper, votes on what Kira should eat are scheduled according to the regular rhythm of Kira getting hungry. The voters take the votes as given -- I think there are usually similar ways to establish systems like this in real-world multi-person organizations.
To the extent average utilitarianism is motivated by avoiding the Repugnant Conclusion, I suspect that most average utilitarians would be as disturbed by aggregating over time as they are by aggregating within a generation, since we can establish a Repugnant Conclusion over times pretty straightforwardly. That said, to the extent intuitions differ when we aggregate over times, I can see that this could pose a challenge to average utilitarians.
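To make "a Repugnant Conclusion over times" concrete, here's a sketch of the argument as I read it, with illustrative numbers: if welfare is averaged within each generation but those averages are summed across generations, a long history of barely-positive generations outscores a short history of excellent ones:

```python
# Illustrative numbers only: average welfare within each generation,
# then aggregate across generations by summing the averages.

def history_score(generations):
    """generations: list of generations, each a list of individual welfare levels."""
    return sum(sum(g) / len(g) for g in generations)

# World A: 2 generations, everyone at welfare 100.
world_a = [[100] * 10] * 2      # score: 100 + 100 = 200

# World B: 1000 generations, everyone at welfare 1 (barely worth living).
world_b = [[1] * 10] * 1000     # score: 1000 * 1 = 1000

print(history_score(world_a))   # 200.0
print(history_score(world_b))   # 1000.0 -- the "repugnant" history wins
```

So an average utilitarian who refuses to average across times faces the same structure of problem over history that totalism faces within a generation.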
I can't recall any work on this argument off the top of my head, but I did recently come across a hint of a related argument directed against distributive egalitarianism. From https://globalprioritiesinstitute.org/economic-inequality-and-the-long-term-future-andreas-t-schmidt-university-of-groningen-and-daan-juijn-ce-delft/ : "An additional question is whether distributive egalitarianism should extend to inequalities across generations." Which links to a footnote: "One of us elsewhere argues that distributive egalitarianism is implausible, because its extension to intergenerational distributions is necessary yet implausible [redacted]." Not sure why the citation is redacted, but I think "one of us" refers to Andreas Schmidt. Of course, extending the analysis to future generations threatens average utilitarianism and distributive egalitarianism in different ways. But the fact that both are threatened by this type of argument suggests to me that a lot of moral theories ought to be stress-tested against "what about across generations?" arguments. I agree that there's an interesting set of questions here.