
AndrewDoris

230 karma · Joined Oct 2020

Bio


I'm a recent graduate of a Yale M.A. program in global affairs and public policy. Before coming to Yale I served four years as a US Army officer. Before that I studied political science and economics at Johns Hopkins. I love travel, sports, and writing, especially about the moral implications of policy issues.

I was first drawn to EA to maximize the impact of my charitable giving, but now use it to help plan my career as well. My current plan is to focus on U.S. foreign policy in an effort to mitigate the danger great power competition poses as a cross-cutting risk factor for several types of existential threats. I also love GiveDirectly, and value altruism that respects the preferences of its intended beneficiaries.

Comments (27)

This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to GiveDirectly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I've given overwhelmingly to global health and development, and I still think that's very important: it's what initially drew me to EA and what I'm most confident is good. But last year I was convinced I had historically underinvested in animal welfare, and I'm starting to make up for that.

I strongly prefer near-term causes with my personal donations, partly because my career focuses on speculative long-term impact. I'm bothered by the strong possibility that my career efforts will benefit nobody, and want to ensure I do at least some good along the way. I also think that in recent years, the wealthiest and most prominent EAs have invested more money into longterm causes than we can be confident is helpful, in ways that have sometimes backfired, damaged or dominated EA's reputation, promoted groupthink in pursuit of jobs/community, and ultimately saddened or embarrassed me. Relatedly, I think managing public perceptions of EA is inescapably important work if we want to effectively improve government policies in democratic countries. So even on longtermist grounds, I think it's important for self-described EAs at the grassroots level to keep proven, RCT-backed, highly effective charities with intuitive mass appeal on the funding menu (perhaps especially if we personally work on longtermism and want people to trust our motives).

Within neartermism, I like to split my donations across a single-digit number of the most impactful funds or charities. This is because I do not have a strong, confident belief that any one of them is most effective, want to maximize my chance of doing a large amount of good overall, and see hedging my bets as a mark of intellectual humility. I don't mind if this makes my altruism less effective than that of the very best EAs, because I'm confident it's better than that of 99% of people. Likewise, I think the path to effective giving at a societal scale depends much more on outreach to the bottom 90% or so of givers, who give barely any quantitative thought to relative impact, than it does on redirecting donations from those already in the movement.

Great comment. I think "people who sacrifice significantly higher salaries to do EA work" is a plausible minimum definition of who those calling for democratic reforms feel deserve a greater say in funding allocation. It doesn't capture all of those people, nor solve the harder question of "what is EA work/an EA organization?" But it's a start.

Your 70/30 example made me wonder whether redesigning EA employee compensation packages to include large matching contributions might help as a democratizing force. Many employers outside EA/in the private sector offer a matching contributions program, wherein they'll match something like 1-5% of your salary (or up to a certain dollar value) in contributions to a certified nonprofit of your choosing. Maybe EA organizations (those that voluntarily opt in) could do that except much bigger - say, 20-50% of your overall compensation is paid not to you but to a charity of your choosing. This could also be tied to tenure so that the offered match increases at a faster rate than take-home pay, reflecting the intuition that committed longtime members of the EA community have engaged with the ideas more, and potentially sacrificed more, and consequently deserve a stronger vote than newcomers.

Ex: Sarah's total compensation is 100k, of which she takes home 80k and her employer offers an additional 20k to a charity of her choosing. After 2 years working there, her total package jumps to 120k, of which she takes home 88k and allocates another 32k. After 10 years she takes home 110k and allocates another 90k, etc. This tenure credit could be transferable across participating organizations. With time, it may even resemble the "impact certificates" you mention.
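To make the arithmetic concrete, here is a minimal sketch (using only the hypothetical figures from the Sarah example above, not any real organization's policy) showing how the charity-allocated share of total compensation would grow faster than take-home pay under such a schedule:

```python
# Hypothetical tenure-based match schedule, using the illustrative figures
# from the Sarah example above (amounts in thousands of dollars).
example_schedule = {
    # years of tenure: (take-home pay, employer-allocated charity match)
    0: (80, 20),
    2: (88, 32),
    10: (110, 90),
}

for years, (take_home, match) in example_schedule.items():
    total = take_home + match
    share = match / total
    print(f"Year {years}: total ${total}k, take-home ${take_home}k, "
          f"match ${match}k ({share:.0%} of total compensation)")
```

Under these illustrative numbers, the match share rises from 20% to 45% of total compensation over ten years, while take-home pay grows much more slowly.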

Employers could limit this match to a prespecified list of plausibly EA recipients if they wish. Employees could accept this arrangement in lieu of giving X% of their personal incomes (which has the added benefit of avoiding taxation on "income" that's only going to be given away to largely tax-deductible organizations anyway). Employees could also elect to give a certain amount back to their employing organization, which some presumably would, since people tend to believe in the importance of the work they are doing. We could write software to anonymize these donations, and avoid any fear of recrimination for NOT regifting it to the employing org.

One downside could be making it more expensive for EA organizations to hire, and thus harder for them to grow and harder for individual EAs to find an EA job. It also wouldn't solve the fact that the resources controlled by EA organizations are not proportional to the number of people they employ, especially at the extremes. Perhaps if mega-donors like Dustin are open to democratization but wary of how to define the EA electorate, they'd support higher grants to participating recipients, on the logic that "if they're EA enough to deserve my grant for X effective project, they're EA enough to deserve a say in how some of my other money is spent too" (even beyond what they need for X).

For all I know EA organizations may have something like this already. If anyone has toyed with or tried to implement this idea before, I'd love to hear about it.

It is often a journalist's explicit job to uncover and publicly release important information from sources who would not consent to its release.

"Moral authority" and "intellectual legitimacy" are such fuzzy terms that I'm not really sure what this post is arguing.

Insofar as they just denote public perceptions, sure: this is obviously bad PR for the movement. It shows we're not immune from big mistakes, and raises fair questions about the judgment of individual EAs, or about certain problematic norms/mindsets among the living, breathing community of humans associated with the label. We'll probably get mocked a bit more and be greeted with more skepticism in elite circles. There are meta-EA problems that need fixing, which I've been reflecting on for the past two weeks.

But "careful observers" also knew that before the FTX scandal, and it's unclear to me which specific ideas in EA philosophy are less intellectually legitimate or authoritative than they were before. When a prominent Democrat politician has a scandal, Democrats get mocked - but nobody intelligent thinks that reduces the moral authority of being pro-choice or supporting stricter gun control, etc. The ideas are right or wrong independent of how their highest-profile advocates behave.

Perhaps SBF's fraud indicts the EA community's lack of scrutiny or safeguards around how to raise money. But to me, it does not at all indict EA's ability to allocate resources once they've been raised. It's not as if the charities SBF was funding were proven ineffective. "The idea that the EA movement is better than others at allocating time and resources toward saving and improving human lives" could be right or wrong, but this incident isn't good evidence either way.

Thanks Thomas - appreciate the updated research. And that wasn't a typo, just a poorly expressed idea. I meant to say, "Only 17% of respondents reported less than 90% confidence that HLMI will eventually exist."

If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exempt reputational consequences from what he thinks is most helpful?

Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he would be transparently buying popularity. But I don't think funding GiveWell's short-term causes - nor even funding them more than you otherwise would for reputational reasons - is equally hypocritical in a way that suggests a lack of integrity. These are still among the most helpful things our community has identified. They are heavily funded by Open Philanthropy and by a huge portion of self-identified EAs, even apart from their reputational benefits. Many, both inside and outside the movement, see malaria bednets as the quintessential EA intervention. Nobody outside the movement would see that as a betrayal of EA principles.

Insofar as EA and SBF's reputations are severable, perhaps it doesn't matter what's quintessentially EA, because "EA principles" are broader than SBF's personal priorities. But in that case, because SBF's personal priorities incline him towards political activism on longtermism, they should also incline him towards reputation management. Caring about things with instrumental value to protecting the future should not be seen as a dishonest deviation from longtermist beliefs, because it isn't!

In another context, doing broadly popular and helpful things you "actually don't think are the most helpful" might just be called hedging against moral uncertainty. Responsiveness to social pressure on altruists' moral priorities is a humble admission that our niche and esoteric movement may have blind spots. It's also, again, what representative politics are all about. If we want to literally help govern the country, we must be inclusive. We must convey that we are not here to evangelize to the ignorant masses, but are self-aware enough to incorporate their values. So if there's a broad bipartisan belief that the very rich have obligations to the poor, SBF may have to validate that if he wants to be seen as altruistic elsewhere.

(I'm in a rush, so apologies if the above rambles).

I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases; but if we must categorize I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations? 

To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, to improve their reputation. That only works at all if you're deeply in tune with which of our ideas are political winners, and which are less so. It only works if you're sensitive to what the media will say. If selectively highlighting our most popular causes seems disingenuous, manipulative, or self-defeating to an impression of integrity, I hear you - but that's hardly a case FOR political advertising. To support what SBF's doing in the first place starts by accepting that, at least to some extent, framing EA in a way the mainstream can get behind instrumentally overlaps with "doing things because we think they're right."

If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations. It just boils down to which misconception you think is empirically more common or dangerous.

My second and broader worry is that EA may be entering the most dangerous reputational period of its existence to date. I'm planning a standalone post on this soon, so I won't elaborate too much on why I think this here. But the surge of recent posts you mention suggests I'm not alone; and if we're right, high-level PR mindfulness could be more important now than ever before. EA's reputation is important for long-term impact, especially if you think (as SBF appears to) that some of the most important X-risk reductions will have to come from within democratic governments.

I do the same, but I think we should be transparent about what those harmful ideas are. Have posted rules about what words or topics are beyond the pale, which a moderator can enforce unilaterally with an announcement, much like they do on private Facebook groups or Reddit threads. Where a harmful comment doesn't explicitly violate a rule, users can still downvote it into oblivion - but it shouldn't be up to one or two people's unilateral discretion.

*(Note: This neighbor threatened me with a kitchen knife when we were both eight years old, and seemed generally prone to violence and antisocial behavior. So I don't think his apparent indifference to mosquito suffering should be taken as a counter-example suggesting that most people are also indifferent.)

TL;DR - Thanks for an interesting and accessible post! With the caveat that I've done no research and have only anecdotes to back this up, I wonder if you may underestimate people's intuitive ability to feel empathy for insects. Perhaps the more daunting obstacle to social concern for insect welfare overlaps with our indifference toward wild animal welfare in general?

***

When I was about 7, one of my young neighbors used to pin large mosquitoes against his playset slide and slowly tear off one limb at a time.* My siblings, parents, and I universally found this repulsive, long before we knew anything about EA. As Brian Tomasik documents in some of his videos, many insects writhe as they die in ways that humans typically associate with pain.

They also attempt to escape death in ways we understand as fear. I used to live in a place with lots of American cockroaches, which are large enough to be gross and startling. I probably squashed 50 - 100 of them over the years. Each time, I couldn't help but feel conflicted chasing them, then applying enough force to feel them burst under a wadded paper towel as they frantically scurried to escape. "If the Jains are right," I joked to a friend, "I'm going to hell."

My reflection from these biased and highly unscientific anecdotes is that even if we do not intuitively feel a moral obligation to protect or care for insects, ensure they live flourishing lives, or even refrain from killing them when they annoy us (or legitimately threaten our health/hygiene), we do at least dimly suspect they are capable of pain and negative emotions, and we feel an obligation not to gratuitously intensify that suffering. We kill bugs, but we prefer to give them a quick death. That's arguably similar to our moral intuitions about other animals. Most people object to dogfighting much more than they object to putting down unwanted strays in a shelter, for example.

For this reason, I do think "don't boil silkworms alive" could eventually catch on as a mainstream cause. So could "don't farm insects in stressful conditions" and "ensure pesticides kill only the desired insects, as quickly as possible." We can be convinced to mitigate whatever unnecessary suffering we are directly responsible for, especially when the required sacrifices are minor. I'd be glad to see EA get involved in this work.

On the other hand, these intuitions will not reach the overwhelming majority of those 10 quintillion insects, and I suspect you'll struggle to convince most people to go further than that. My hunch is that this is for the same reason people are skeptical of wild animal welfare in general. Most people's moral intuitions have at least some deontological streak, so they feel much more responsible for animals that suffer at human hands than they do for those that suffer from natural predation, starvation, infection, etc. When we watch one animal eat another in a nature documentary, we may feel some compassion (admittedly proportional to how cute the eaten animal was). But we do not feel guilt or responsibility to change our own behavior in the same way we might if we were to have personally hunted or eaten the animal.

So my theory is that even though insects are uniquely small, weird, or scary, we can empathize with them in similar circumstances to our empathy for other animals. Nonetheless, this empathy isn't enough to reach most suffering insects. 

If this theory is true, it has implications for what strategies are likeliest to succeed in improving insect welfare, as well as how we should categorize insect welfare among other EA causes. Whereas factory-farmed chickens represent the overwhelming majority of overall chickens on Earth, farmed insects are a tiny minority of overall insects, and seem likely to remain so. In this way, insect welfare could be seen as a speculative but high-stakes subset of wild animal welfare, the tractability of which may depend on similar advocacy approaches.
