I'm a Yale grad student studying (mainly) US foreign policy. Before coming to Yale I served four years as a US Army officer. Before that I studied political science and economics at Johns Hopkins. I love travel, sports, and writing, especially about the moral implications of policy issues.

I was first drawn to EA to maximize the impact of my charitable giving, but now use it to plan my career and aspects of my lifestyle as well. My current plan is to focus on U.S. foreign policy in an effort to mitigate the danger great power competition poses as a cross-cutting risk factor for several types of existential threats. My favorite charity is GiveDirectly, and I value altruism that respects the preferences of its intended beneficiaries.

Topic Contributions


Sam Bankman-Fried should spend $100M on short-term projects now

If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exempt reputational consequences from what he thinks is most helpful?

Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he would be transparently buying popularity. But I don't think funding GiveWell's short-term causes - nor even funding them more than you otherwise would for reputational reasons - is equally hypocritical in a way that suggests a lack of integrity. These are still among the most helpful things our community has identified. They are heavily funded by Open Philanthropy and by a huge portion of self-identified EAs, even apart from their reputational benefits. Many, both inside and outside the movement, see malaria bednets as the quintessential EA intervention. Nobody outside the movement would see that as a betrayal of EA principles.

Insofar as EA and SBF's reputations are severable, perhaps it doesn't matter what's quintessentially EA, because "EA principles" are broader than SBF's personal priorities. But in that case, because SBF's personal priorities incline him towards political activism on longtermism, they should also incline him towards reputation management. Caring about things with instrumental value to protecting the future should not be seen as a dishonest deviation from longtermist beliefs, because it isn't!

In another context, doing broadly popular and helpful things you "actually don't think are the most helpful" might just be called hedging against moral uncertainty. Responsiveness to social pressure on altruists' moral priorities is a humble admission that our niche and esoteric movement may have blind spots. It's also, again, what representative politics are all about. If we want to literally help govern the country, we must be inclusive. We must convey that we are not here to evangelize to the ignorant masses, but are self-aware enough to incorporate their values. So if there's a broad bipartisan belief that the very rich have obligations to the poor, SBF may have to validate that if he wants to be seen as altruistic elsewhere.

(I'm in a rush, so apologies if the above rambles.)

Sam Bankman-Fried should spend $100M on short-term projects now

I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases; but if we must categorize I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations? 

To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, to improve their reputation. That only works at all if you're deeply in tune with which of our ideas are political winners, and which are less so. It only works if you're sensitive to what the media will say. If selectively highlighting our most popular causes seems disingenuous, manipulative, or self-defeating to an impression of integrity, I hear you - but that's hardly a case FOR political advertising. To support what SBF's doing in the first place starts by accepting that, at least to some extent, framing EA in a way the mainstream can get behind instrumentally overlaps with "doing things because we think they're right."

If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations. It just boils down to which misconception you think is empirically more common or dangerous.

My second and broader worry is that EA may be entering the most dangerous reputational period of its existence to date. I'm planning a standalone post on this soon, so I won't elaborate too much on why I think this here. But the surge of recent posts you mention suggests I'm not alone; and if we're right, high-level PR mindfulness could be more important now than ever before. EA's reputation is important for long-term impact, especially if you think (as SBF appears to) that some of the most important X-risk reductions will have to come from within democratic governments.

Revisiting the karma system

I do the same, but I think we should be transparent about what those harmful ideas are. We could have posted rules about which words or topics are beyond the pale, which a moderator can enforce unilaterally with an announcement, much as they do on private Facebook groups or Reddit threads. Where a harmful comment doesn't explicitly violate a rule, users can still downvote it into oblivion - but it shouldn't be up to one or two people's unilateral discretion.

Why should I care about insects?

*(Note: This neighbor threatened me with a kitchen knife when we were both eight years old, and seemed generally prone to violence and antisocial behavior. So I don't think his apparent indifference to mosquito suffering should be taken as a counter-example suggesting that most people are also indifferent.)

Why should I care about insects?

TL;DR - Thanks for an interesting and accessible post! With the caveat that I've done no research and have only anecdotes to back this up, I wonder if you may underestimate people's intuitive ability to feel empathy for insects. Perhaps the more daunting obstacle to social concern for insect welfare overlaps with our indifference toward wild animal welfare in general?


When I was about 7, one of my young neighbors used to pin large mosquitoes against his playset slide and slowly tear off one limb at a time.* My siblings, parents, and I universally found this repulsive, long before we knew anything about EA. As Brian Tomasik documents in some of his videos, many insects writhe as they die in ways that humans typically associate with pain.

They also attempt to escape death in ways we understand as fear. I used to live in a place with lots of American cockroaches, which are large enough to be gross and startling. I probably squashed 50-100 of them over the years. Each time, I couldn't help but feel conflicted chasing them, then applying enough force to feel them burst under a wadded paper towel as they frantically scurried to escape. "If the Jains are right," I joked to a friend, "I'm going to hell."

My takeaway from these biased and highly unscientific anecdotes is that even if we do not intuitively feel a moral obligation to protect or care for insects - to ensure they live flourishing lives, or even to refrain from killing them when they annoy us or legitimately threaten our health and hygiene - we do at least dimly suspect they are capable of pain and negative emotions, and we feel an obligation not to gratuitously intensify that suffering. We kill bugs, but we prefer to give them a quick death. That's arguably similar to our moral intuitions for other animals. Most people object to dogfighting much more than they object to putting down unwanted strays in a shelter, for example.

For this reason, I do think "don't boil silkworms alive" could eventually catch on as a mainstream cause. So could "don't farm insects in stressful conditions" and "ensure pesticides kill only the desired insects, as quickly as possible." We can be convinced to mitigate whatever unnecessary suffering we are directly responsible for, especially when the required sacrifices are minor. I'd be glad to see EA get involved in this work.

On the other hand, these intuitions will not reach the overwhelming majority of those 10 quintillion insects, and I suspect you'll struggle to convince most people to go further than that. My hunch is that this is for the same reason people are skeptical of wild animal welfare in general. Most people's moral intuitions have at least some deontological streak, so they feel much more responsible for animals that suffer at human hands than they do for those that suffer from natural predation, starvation, infection, etc. When we watch one animal eat another in a nature documentary, we may feel some compassion (admittedly proportional to how cute the eaten animal was). But we do not feel guilt or responsibility to change our own behavior in the same way we might if we were to have personally hunted or eaten the animal.

So my theory is that even though insects are uniquely small, weird, or scary, we can empathize with them in circumstances similar to those that evoke our empathy for other animals. Nonetheless, this empathy isn't enough to reach most suffering insects.

If this theory is true, it has implications for what strategies are likeliest to succeed in improving insect welfare, as well as how we should categorize insect welfare among other EA causes. Whereas factory-farmed chickens represent the overwhelming majority of overall chickens on Earth, farmed insects are a tiny minority of overall insects, and seem likely to remain so. In this way, insect welfare could be seen as a speculative but high-stakes subset of wild animal welfare, the tractability of which may depend on similar advocacy approaches.

Snakebites kill 100,000 people every year, here's what you should know

I suspect it would be easier to convince people who HAVE been bitten by a snake to go to the hospital than it will be to convince people who have not yet been bitten to wear some kind of protective wraparound shinguards every time they're on the farm. The daily inconvenience level seems high for such a rare event. Even malaria nets are often not used for their intended purpose once distributed, and they seem to me like less of an inconvenience.

How likely is World War III?

Makes sense, and I'm not surprised to hear Allison may overestimate the risk. By coincidence, I just finished a rough cost/benefit analysis of U.S. counterterrorism efforts in Afghanistan for my studies, and his book on Nuclear Terrorism also seemed to exaggerate that risk. (I do give him credit for making an explicit prediction, though, a few years before most of us were into that sort of thing.)

In any case, I look forward to a more detailed read of your Founders Pledge report once my exams end next week. The Evaluating Interventions section seems like precisely what I've been looking for in trying to plan my own foreign policy career.

[$20K In Prizes] AI Safety Arguments Competition

I took that from a Kelsey Piper writeup here, assuming she was summarizing some study:

"Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now."

The hyperlink goes to an FHI paper that appears to just summarize various risks, so it's unclear what her source was for the "most." I'd be curious to know as well. She does stress the greater variance of outcomes and uncertainty surrounding AI - writing "Our predictions about climate change are more confident, both for better and for worse" - so maybe my distillation should admit that too.

[$20K In Prizes] AI Safety Arguments Competition

Imagine if ants figured out a way to invent human beings. Because they spend all day looking for food, they might program us to "go make lots of food!" And maybe they'd even be cautious and anticipate certain problems. So they also program us not to use any anteaters as we do it. Those things are dangerous!

What would we do? Probably, we'd make a farm that grows many times more food than the ants have ever seen. And then we'd water the crops - flooding the ant colony and killing all the ants. Of course, we didn't TRY to kill the ants; they were just in the way of the goal they gave us. And because we are many times smarter than ants, we accomplished their goal in a way they couldn't even fathom protecting against.

That's basically the worry with advanced Artificial Intelligence. Many scientists think we're approaching a day when AI will be many times smarter than us, and they still don't know how to stop it from doing things we don't want. If it gets powerful enough before we learn how to control it, it could make us like the ants.

[$20K In Prizes] AI Safety Arguments Competition

Artificial Intelligence is very difficult to control. Even in relatively simple applications, the top AI experts struggle to make it behave. This becomes increasingly dangerous as AI gets more powerful. In fact, many experts fear that if a sufficiently advanced AI were to escape our control, it could actually extinguish all life on Earth. Because AI pursues whatever goals we give it with no mind to other consequences, it would stop at nothing – even human extinction – to maximize its reward.

We can't know exactly how this would happen - but to make it less abstract, let's imagine some possibilities. Any AI with internet access may be able to save millions of copies of itself on unsecured computers all over the world, each ready to wake up if another were destroyed. This alone would make it virtually indestructible unless humans destroyed the internet and every computer on Earth. Doing so would be politically difficult in the best case - but especially so if the AI were also using millions of convincing disinformation bots to distract people, conceal the truth, or convince humans not to act. The AI may also be able to conduct brilliant cyber attacks to take control of critical infrastructure like power stations, hospitals, or water treatment facilities. It could hack into weapons of mass destruction - or invent its own. And what it couldn't do itself, it could bribe or blackmail humans to do for it by seizing cash from online bank accounts.

For these reasons, most AI experts think advanced AI is much likelier than climate change to wipe out human life. Even if you think this is unlikely, the stakes are high enough to warrant caution.
