
AndrewDoris

235 karma

Bio

I'm a recent graduate of a Yale M.A. program in global affairs and public policy. Before coming to Yale I served four years as a US Army officer. Before that I studied political science and economics at Johns Hopkins. I love travel, sports, and writing, especially about the moral implications of policy issues.

I was first drawn to EA to maximize the impact of my charitable giving, but now use it to help plan my career as well. My current plan is to focus on U.S. foreign policy in an effort to mitigate great power competition, which acts as a cross-cutting risk factor for several types of existential threats. I also love GiveDirectly, and value altruism that respects the preferences of its intended beneficiaries.

Comments (31)

I'm surprised nobody has commented yet, and want to say that I really enjoyed and largely agree with this piece. The logic of needing to accelerate an AGI arms race to stay ahead of China is deeply flawed in ways that mirror unfortunate pathologies in the US foreign policy community, and acting on it would, IMO, worsen US national security for many of the reasons you mention.

Two questions for you:

  1. How politically feasible is it to advance messaging along these lines, given the incoming administration's tech optimism and zero-sum foreign policy mindset (and indeed, the rare bipartisan consensus on hawkishness towards China)? I could see a lot of folks in the EA community saying "You're right, of course, but the train has left the station. We as a community lack the power to redirect policymakers' incentives and perceived interests on this issue any time soon, and the timelines are getting shorter, so we don't have time to try. Instead of marginalizing ourselves by trying to prevent an arms race that is by now inevitable and well underway, or pushing for collaborative international frameworks that MAGA has no interest in, it'd be more impactful to work within the existing incentives to slow down China and lobby for whatever marginal safety improvements we can."

  2. Why did you label the views we disagree with "AGI realism"? Is that the label its advocates prefer, or did you choose the word "realism" yourself? I ask because I think much of the argument dramatizing the stakes of China getting this before us is linked with liberal internationalist mindsets that see the 21st century as a civilizational struggle between democracy and autocracy, and see AI as just one complicating wrinkle in that big-picture fight. Conversely, many of the voices calling for more restraint in US foreign policy (ex: abandoning hegemony and embracing multipolarity) call themselves realists, and see the path to peace as ensuring a stable and durable balance of power. So I think of it more as a debate between AI hawks and AI doves/restrainers, both of which could be either realists or something else.

I don't think this post has aged well overall, but I'd particularly like to focus on your claim that EAs should seek out Cold Uggies instead of running away from them. I disagree strongly. Cold Uggies are often a useful warning that you're about to engage in immoral activity by the standards of your own conscience. For example, both EA and the world would surely have been better off if SBF had listened to his Cold Uggies more closely. Often, Cold Uggies indicate the existence of important side constraints that should not be breached. If there are neglected opportunities on the other side, they may be neglected for good reason.

I'm closer to a libertarian than a leftist, and I might have agreed with your argument in 2012. But in the 2020s - after one attempted coup already, and now with an unelected tycoon set loose to run roughshod over Congress' budgetary powers, purge anyone loyal to the constitution, and turn government into a vehicle for personal enrichment and revenge - it's clear enough to me that the Republican political movement is overwhelmingly a force for bad. Anyone "going Republican" would have to go along with too much of this from the inside.

Far from being too hostile to Republicans, too many EAs are naively ambivalent about Trump in particular, and about the extent to which his reckless, illiberal, and anti-intellectual political movement is a cross-cutting risk factor exacerbating many forms of X-risk at once, from AI to bio to nuclear to climate change. Republicans must be engaged with where necessary, but actively supporting them likely has spillover harms that exceed the benefits.

As someone who started a Substack six months ago, I actually suspect it could have the opposite impact: people with relevant content will be even more eager to cross-post it here to try and build their audience.

I've not done this myself because most of my content is not EA-related and the two posts of mine which were EA-related felt too introductory for the forum (ex: defending EA against non-EA critics). I also felt weird about using an altruistic platform to self-promote too nakedly if the post wasn't a good fit (and even now I feel weird that someone might interpret this comment that way - isn't self-awareness fun?).

But to my point, neither of the posts I wrote on Substack was something I'd have posted here if I hadn't started a Substack. And if I did have thoughts that felt like they would be of value to people who were already highly involved EAs, I'd be especially excited to put in effort towards fleshing them out now compared to how excited I'd have been a year ago, due to the ability to cross-post and potentially draw more eyeballs to my Substack.

In short, watching the subscriber count rise is an even more flattering dopamine boost than EA forum karma, and the market is competitive enough that many writers are sharking for excuses to self-promote on external sites.

(Relatedly, I'd definitely +1 the idea of making a Substack for the weekly Forum Digest as soon as possible (maybe you already have one, but I couldn't find it from a quick search just now).)

I share your impression that the word is often used differently in broader society and by mainstream animal rights groups than it is by technical philosophers and in the EA space. I think the average person would still hear it as akin to racism or sexism or some other -ism. By criticizing those isms, we DO in fact mean to imply that individual human beings are of equal moral value regardless of their race or sex. And by that standard, I'd be a proud speciesist, because I do think individual beings of some species are innately more valuable than others.

We can split hairs about why that is - capacity for love or pain or knowledge or neuron count or whatever else we find valuable about a life - but it will still require you to come out with a multiplier for how much more valuable a healthy "normal" human is relative to a healthy normal member of another species, which would be absolutely anathema in the racial or sexual context.

This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to GiveDirectly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I've given overwhelmingly to global health and development, and I still think that's very important: it's what initially drew me to EA and what I'm most confident is good. But last year I was convinced I had historically underinvested in animal welfare, and I'm starting to make up for that.

I strongly prefer near-term causes with my personal donations, partly because my career focuses on speculative long-term impact. I'm bothered by the strong possibility that my career efforts will benefit nobody, and want to ensure I do at least some good along the way. I also think that in recent years, the wealthiest and most prominent EAs have invested more money into longterm causes than we can be confident is helpful, in ways that have sometimes backfired, damaged or dominated EA's reputation, promoted groupthink in pursuit of jobs/community, and ultimately saddened or embarrassed me. Relatedly, I think managing public perceptions of EA is inescapably important work if we want to effectively improve government policies in democratic countries. So even on longtermist grounds, I think it's important for self-described EAs at the grassroots level to keep proven, RCT-backed, highly effective charities with intuitive mass appeal on the funding menu (perhaps especially if we personally work on longtermism and want people to trust our motives).

Within neartermism, I like to split my donations across a single-digit number of the most impactful funds or charities. This is because I lack a strong, confident belief that any one of them is the most effective, want to maximize my chance of doing a large amount of good overall, and see hedging my bets as a mark of intellectual humility. I don't mind if this makes my altruism less effective than that of the very best EAs, because I'm confident it's better than that of 99% of people. Likewise, I think the path to effective giving at a societal scale depends much more on outreach to the bottom 90% or so of givers, who give barely any quantitative thought to relative impact, than it does on redirecting donations from those already in the movement.

Great comment. I think "people who sacrifice significantly higher salaries to do EA work" is a plausible minimum definition of the group that those calling for democratic reforms feel deserves a greater say in funding allocation. It doesn't capture all of those people, nor does it solve the harder question of "what is EA work/an EA organization?" But it's a start.

Your 70/30 example made me wonder whether redesigning EA employee compensation packages to include large matching contributions might help as a democratizing force. Many employers outside EA/in the private sector offer a matching contributions program, wherein they'll match something like 1-5% of your salary (or up to a certain dollar value) in contributions to a certified nonprofit of your choosing. Maybe EA organizations (whichever voluntarily opt in) could do that, but much bigger - say, 20-50% of your overall compensation is paid not to you but to a charity of your choosing. This could also be tied to tenure so that the offered match increases at a faster rate than take-home pay, reflecting the intuition that committed longtime members of the EA community have engaged with the ideas more, and potentially sacrificed more, and consequently deserve a stronger vote than newcomers.

Ex: Sarah's total compensation is 100k, of which she takes home 80k and her employer offers an additional 20k to a charity of her choosing. After 2 years working there, her total package jumps to 120k, of which she takes home 88k and allocates another 32k. After 10 years she takes home 110k and allocates another 90k, etc. This tenure credit could be transferable across participating organizations. With time, it may even resemble the "impact certificates" you mention.
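To make the arithmetic concrete, here is a minimal sketch in Python of how a tenure-based match like this might be computed. The schedule (20% match at hire, rising by a few percentage points per year, capped at 50%) and all dollar figures are entirely hypothetical illustrations that roughly reproduce the Sarah example above, not a proposal for any particular organization's policy.

```python
# Hypothetical sketch of a tenure-based charity match (illustrative numbers only).

def split_compensation(total_comp: float, years_tenure: int) -> tuple[float, float]:
    """Return (take_home, charity_match) for a given total package and tenure."""
    # Made-up schedule: 20% match at hire, plus 2.5 percentage points per year
    # of tenure, capped at 50%. Roughly matches the Sarah example (45% at 10 years).
    match_fraction = min(0.20 + 0.025 * years_tenure, 0.50)
    charity_match = total_comp * match_fraction
    return total_comp - charity_match, charity_match

if __name__ == "__main__":
    for total, years in [(100_000, 0), (120_000, 2), (200_000, 10)]:
        take_home, match = split_compensation(total, years)
        print(f"{years:>2} yrs: take home {take_home:,.0f}, charity match {match:,.0f}")
```

The point of the sketch is just that the match fraction, not only the dollar amount, grows with tenure, so longtime employees allocate a larger share of a larger package.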

Employers could limit this match to a prespecified list of plausibly EA recipients if they wish. Employees could accept this arrangement in lieu of giving X% of their personal incomes (which has the added benefit of avoiding taxation on "income" that's only going to be given away to largely tax-deductible organizations anyway). Employees could also elect to give a certain amount back to their employing organization, which some presumably would, since people tend to believe in the importance of the work they are doing. We could write software to anonymize these donations, and avoid any fear of recrimination for NOT regifting the match to the employing org.

One downside could be making it more expensive for EA organizations to hire, and thus harder for them to grow and harder for individual EAs to find an EA job. It also wouldn't solve the fact that the resources controlled by EA organizations are not proportional to the number of people they employ, especially at the extremes. Perhaps if mega-donors like Dustin are open to democratization but wary of how to define the EA electorate, they'd support higher grants to participating recipients, on the logic that "if they're EA enough to deserve my grant for X effective project, they're EA enough to deserve a say in how some of my other money is spent too" (even beyond what they need for X).

For all I know EA organizations may have something like this already. If anyone has toyed with or tried to implement this idea before, I'd love to hear about it.

It is often the explicit job of a journalist to uncover, and publicly release, important information from sources who would not consent to its release.

"Moral authority" and "intellectual legitimacy" are such fuzzy terms that I'm not really sure what this post is arguing.

Insofar as they just denote public perceptions, sure: this is obviously bad PR for the movement. It shows we're not immune to big mistakes, and raises fair questions about the judgment of individual EAs, or about certain problematic norms/mindsets among the living, breathing community of humans associated with the label. We'll probably get mocked a bit more and be greeted with more skepticism in elite circles. There are meta-EA problems that need fixing, which I've been reflecting on for the past two weeks.

But "careful observers" also knew that before the FTX scandal, and it's unclear to me which specific ideas in EA philosophy are less intellectually legitimate or authoritative than they were before. When a prominent Democrat politician has a scandal, Democrats get mocked - but nobody intelligent thinks that reduces the moral authority of being pro-choice or supporting stricter gun control, etc. The ideas are right or wrong independent of how their highest-profile advocates behave.

Perhaps SBF's fraud indicts the EA community's lack of scrutiny or safeguards around how to raise money. But to me, it does not at all indict EA's ability to allocate resources once they've been raised. It's not as if the charities SBF was funding were proven ineffective. "The idea that the EA movement is better than others at allocating time and resources toward saving and improving human lives" could be right or wrong, but this incident isn't good evidence either way.

Thanks Thomas - appreciate the updated research. And that wasn't a typo, just a poorly expressed idea. I meant to say, "Only 17% of respondents reported less than 90% confidence that HLMI will eventually exist."

If you are a consequentialist, then incorporating reputational consequences into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exclude reputational consequences from his assessment of what is most helpful?

Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he would be transparently buying popularity. But I don't think funding GiveWell's short-term causes - nor even funding them more than you otherwise would for reputational reasons - is equally hypocritical in a way that suggests a lack of integrity. These are still among the most helpful things our community has identified. They are heavily funded by Open Philanthropy and by a huge portion of self-identified EAs, even apart from their reputational benefits. Many, both inside and outside the movement, see malaria bednets as the quintessential EA intervention. Nobody outside the movement would see that as a betrayal of EA principles.

Insofar as EA and SBF's reputations are severable, perhaps it doesn't matter what's quintessentially EA, because "EA principles" are broader than SBF's personal priorities. But in that case, because SBF's personal priorities incline him towards political activism on longtermism, they should also incline him towards reputation management. Caring about things with instrumental value to protecting the future should not be seen as a dishonest deviation from longtermist beliefs, because it isn't!

In another context, doing broadly popular and helpful things you "actually don't think are the most helpful" might just be called hedging against moral uncertainty. Responsiveness to social pressure on altruists' moral priorities is a humble admission that our niche and esoteric movement may have blind spots. It's also, again, what representative politics are all about. If we want to literally help govern the country, we must be inclusive. We must convey that we are not here to evangelize to the ignorant masses, but are self-aware enough to incorporate their values. So if there's a broad bipartisan belief that the very rich have obligations to the poor, SBF may have to validate that if he wants to be seen as altruistic elsewhere.

(I'm in a rush, so apologies if the above rambles).
