Ariel Simnegar

2051 karma · Joined



I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.


(The following is mostly copied from this thread due to a lack of time. I unfortunately can't commit to much engagement on replies to this.)

The sign of MSI's effect seems to rely crucially on a very high credence in the person-affecting view, under which the interests of merely possible future people are not counted.

Since 2000, MSI has averted one maternal death by preventing on average 502 unintended pregnancies. Even if only ~20% of these unintended pregnancies would have counterfactually been carried to term (due to abortion, replacement, and other factors), that still means preventing one maternal death prevents the creation of ~100 human beings. In other words, MSI's intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one desires to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for MSI to be net positive.
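The break-even credence above follows from a quick expected-value calculation. Here's a toy sketch using only the numbers in this comment (the variable names and the simple two-outcome model are mine):

```python
# Toy model of the expected choice-worthiness argument above.
# Figures from the comment: ~502 unintended pregnancies prevented per
# maternal death averted, of which ~20% would counterfactually have
# been carried to term.
pregnancies_prevented = 502
carried_to_term_rate = 0.20
lives_prevented = pregnancies_prevented * carried_to_term_rate  # ~100

# Let p = credence in the person-affecting view.
# Under that view, preventing future lives has zero disvalue, so the
# intervention's value is +1 (the maternal life saved). Under a
# non-person-affecting view it is 1 - lives_prevented.
#   EV(p) = p * 1 + (1 - p) * (1 - lives_prevented)
# Setting EV(p) = 0 and solving gives the break-even credence:
break_even = 1 - 1 / lives_prevented

print(round(lives_prevented))  # ~100 lives prevented per death averted
print(round(break_even, 3))    # ~0.99 credence required for net positivity
```

This is just the arithmetic behind the "~99% confident" claim, not a full moral-uncertainty framework; it assumes the value of each human life experience is equal and independent of which view is true.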

However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that "all proposed defences of the intuition of neutrality [i.e. the person-affecting view] suffer from devastating objections". Toby Ord writes in The Precipice p. 263 that "Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people."

If there's a significant probability that the person-affecting view may be false, then MSI's effect could in reality be up to 100x as negative as its effect on mothers is positive.

I worry about this line of reasoning because it's ends-justify-the-means thinking.

Let's say billions of people were being tortured right now, and some longtermists wrote about how this isn't even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years of suffering on a theoretical idea. I can just imagine The Guardian's articles about how SBF's naive utilitarianism is alive and well in EA.

The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn't a difference in the actual badness.

Separately, to engage with the utilitarian merits of your argument, my main skepticism stems from an unwillingness to go all-in on ideas which remain theoretical when the stakes are billions of years of torture. (For example, suppose we ignore factory farming, and then some still-unknown consideration prevents us or anyone else from accessing the cosmic endowment. That scares me.) Also, though I'm not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.

I'd like to give some context for why I disagree.

Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted that "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.

I also think Hanania has excellent takes on most issues, and that's because he's the most intellectually honest blogger I've encountered. I think Hanania likes EA because he's willing to admit that he's imperfect, unlike EA's critics who would rather feel good about themselves than actually help others.

More broadly, I think we could be doing more to attract people who don't hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:

  • In this era of political polarization, it would be a travesty for EA issues to become partisan.
  • All else equal, political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
  • If we want EA to be a global social movement, we need to show that one can be an EA even while holding beliefs on other issues that we find repugnant. I live in Panama for my job. When I arrived here, I experienced culture shock at how backwards many people's views are on racism and sexism. If we can't be friends with the person next door who has bad views, how are we going to make allies globally?

Funnily enough, that verse is often referenced to me by religious Jews when I talk about how many EAs donate >>20%.

MISHNA: Rabbi GWWC said in the name of Rabbi Singer: It is a mitzvah (good deed) to pledge 10%, but one is not required to take upon himself the chumra (stringency) of the Further Pledge.

GEMARA: Rava asks: One who takes the Further Pledge can be compared to the Nazirite, who is called a sinner, for he is depriving himself of what the Holy One, Blessed be He, has provided him. So how can Rabbi GWWC say that one who takes the Further Pledge is a righteous man?

Abaye says in the name of Rabbi Singer: The mashal (parable) of the drowning child brings down that one is obligated to give up all of one's possessions to save another's life. For this reason Rabbi GWWC says one who takes the Further Pledge is a righteous man. As Scripture teaches us, "one who saves a life is as though he has saved the world entire".

Rava asks: But why then is 10% sufficient, if it is brought down that one must give up all of one's possessions to save a life?

Abaye says: In the matter of the city of Sodom, the Lord says that "for the sake of 10 righteous men, I would not destroy it". By homiletic interpretation, if one donates even 10%, for his sake the world will be spared.

  1. I agree that clinicians should use lidocaine or digoxin over potassium chloride (KCl) for the reason you gave.
  2. I wrote that the injection is "often of potassium chloride", not always.
    1. Given that the fetus is receiving a lethal dose of potassium chloride, I don't think adults tolerating a much smaller medicinal dose should tell us much about how painful a lethal dose would be?
    2. I agree that the fetus isn't being given potassium chloride intravenously, although I didn't know that when I wrote the post (another commenter pointed it out). I'll add a line in the post disclaiming that comparison.

Happy to hear we agree on fetal anesthesia :)

I also very much agree that there's no conflict between this and the pro-choice position, and that increased abortion access would reduce fetal suffering in late-term abortions. (Although increasing abortion access has other, larger ethical problems: from a total utilitarian perspective, there doesn't seem to be much difference between preventing a fetus from living a full life and doing the same for an infant or adult.)

On comparing individual fetuses to individual farm animals, it's worth noting that a 13-week fetus has about half as many neurons as an adult cow. (Cows have 3 billion neurons, while 13-week fetuses have 3 billion brain cells. Since humans have a near 1:1 neuron-to-glia ratio, a 13-week fetus should have roughly 1.5 billion neurons, about half a cow's count.) So on at least one metric, they'd be pretty comparable. Of course, I'm pretty sure this fact is swamped by the other facts about factory farming you gave.
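The arithmetic behind that comparison is simple enough to spell out. A toy sketch using only the figures cited above (the ~1:1 neuron-to-glia split is the assumption doing the work):

```python
# Figures cited in the comment:
cow_neurons = 3e9          # adult cow: ~3 billion neurons
fetus_brain_cells = 3e9    # 13-week fetus: ~3 billion total brain cells

# Assumption from the comment: humans have a near 1:1 neuron-to-glia
# ratio, so about half of the fetus's brain cells are neurons.
neuron_fraction = 0.5
fetus_neurons = fetus_brain_cells * neuron_fraction  # ~1.5 billion

ratio = fetus_neurons / cow_neurons
print(f"{fetus_neurons:.1e} fetal neurons, {ratio:.0%} of a cow's")
```

If the neuron-to-glia ratio were materially different at 13 weeks' gestation than in adults, the ratio would shift proportionally; this is a back-of-the-envelope figure, not a developmental-neuroscience estimate.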

I agree that this probably wouldn't be competitive with animal welfare. However, if we're holding it to the standard for suffering-reducing interventions for humans, it could plausibly be more competitive.

This description of labor induction abortion says:

The skin on your abdomen is numbed with a painkiller, and then a needle is used to inject a medication (digoxin or potassium chloride) through your abdomen into the fluid around the fetus or the fetus to stop the heartbeat.

That sounds like local anesthesia for the mother, which from what I understand is achieved through an injection which numbs the tissue in a specific area rather than through an IV drip. So I don't think this protocol would have any anesthetic effect on the fetus, though I'm not a medical expert and could be wrong.

Based on this, I think the sentence “The fetus is administered a lethal injection with no anesthesia” is accurate.

Thanks for this! I agree that apart from speciesism, there isn't a good reason to prioritize GHD over animal welfare if targeting suffering reduction (or just directly helping others).

Would you mind expanding further on the goals of the "reliable global capacity growth" cause bucket? It seems to me that several traditionally longtermist / uncategorized cause areas could fit into this bucket, such as:

Under your categorization, would these be included in GHD?

It also seems that some traditionally GHD charities would fall into the "suffering reduction" bucket, since their impact is focused on directly helping others:

  • Fistula Foundation
  • StrongMinds

Under your categorization, would these be included in animal welfare?

Also, would you recommend that GHD charity evaluators more explicitly change their optimization target from metrics which measure directly helping others / suffering reduction (QALYs, WELLBYs) to "global capacity growth" metrics? What might these metrics look like?
