
Ariel Simnegar 🔸

2196 karma

Bio


I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.

Comments (175)

I think some critiques of GVF/OP in this comments section could have been made more warmly and charitably.

The main funder of a movement's largest charitable foundation is spending hours seriously engaging with community members' critiques of this strategic update. For most movements, no such conversation would occur at all.

Some critics in the comments are practicing rationalist discussion norms (high decoupling & reasoning transparency) and wish OP's communications were more like that too. However, it seems there's a lot we don't know about what caused GVF/OP leadership to make this update. Dustin seems very concerned about GVF/OP's attack surface and about conserving the bandwidth of their non-monetary resources. He's written at length about how he doesn't endorse rationalist-level decoupling as a rule of discourse. Given all of this, it's understandable that from Dustin's perspective, he has good reasons for not being as legible as he could be. Dishonest outside actors could quote statements or frame actions far more uncharitably than anything we'd see on the EA Forum.

Dustin is doing the best he can to engage with the rest of the community while balancing how much he explains his reasoning against legibility constraints we don't know about. We should be grateful for that.

Thanks for the post, Vasco!

From reading your post, I take your main claim to be: the expected value of the long-term future is similar whether it's controlled by humans, unaligned AGI, or another Earth-originating intelligent species.

If that's a correct understanding, I'd be interested in a more vigorous justification of that claim. Some counterarguments:

  1. This claim seems to assume the falsity of the orthogonality thesis? (Which is fine, but I'd be interested in a justification of that premise.)
  2. Let's suppose that if humanity goes extinct, it will be replaced by another intelligent species, and that intelligent species will have good values. (I think these are big assumptions.) Priors would suggest that it would take millions of years for this species to evolve. If so, that's millions of years where we're not moving to capture universe real estate at near-light-speed, which means there's an astronomical amount of real estate which will be forever out of this species' light cone. It seems like just avoiding this delay of millions of years is sufficient for x-risk reduction to have astronomical value.

You also dispute that we're living in a time of perils, though that doesn't seem so cruxy, since your main claim above should be enough for your argument to go through either way. Still, your justification is that "I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades". There's a lot of literature (The Precipice, The Most Important Century, etc.) which argues that we have enough evidence of this century's uniqueness to overcome this prior. I'd be curious about your take on that.

(Separately, I think you had more to write after the sentence "Their conclusions seem to mostly follow from:" in your post's final section?)

(The following is mostly copied from this thread due to a lack of time. I unfortunately can't commit to much engagement on replies to this.)

The sign of the effect of MSI seems to rely crucially on a very high credence in the person-affecting view, where the interests of future people are not considered.

Since 2000, MSI has averted one maternal death by preventing on average 502 unintended pregnancies. Even if only ~20% of these unintended pregnancies would have counterfactually been carried to term (due to abortion, replacement, and other factors), that still means preventing one maternal death prevents the creation of ~100 human beings. In other words, MSI's intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one desires to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for MSI to be net positive.
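To spell out the arithmetic behind that ~99% threshold (a rough sketch, weighting each of the ~100 prevented lives equally against the one maternal life saved, per the independence assumption above): let $p$ be one's credence in the person-affecting view. Then MSI is net positive in expectation only if

$$
\underbrace{p \cdot 1}_{\text{view true}} + \underbrace{(1-p)\,(1-100)}_{\text{view false}} > 0 \;\iff\; 100p > 99 \;\iff\; p > 0.99.
$$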

However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that "all proposed defences of the intuition of neutrality [i.e., the person-affecting view] suffer from devastating objections". Toby Ord writes in The Precipice (p. 263) that "Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people."

If there's a significant probability that the person-affecting view may be false, then MSI's effect could in reality be up to 100x as negative as its effect on mothers is positive.

I worry about this line of reasoning because it's ends-justify-the-means thinking.

Let's say billions of people were being tortured right now, and some longtermists wrote about how this isn't even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years of suffering on a theoretical idea. I can just imagine The Guardian's articles about how SBF's naive utilitarianism is alive and well in EA.

The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn't a difference in the actual badness.

Separately, to engage with the utilitarian merits of your argument, my main skepticism is an unwillingness to go all-in on ideas which remain theoretical when the stakes are billions of years of torture. (For example, let's say we ignore factory farming, and then there's a still unknown consideration which prevents us or anyone else from accessing the cosmic endowment. That scares me.) Also, though I'm not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.

I'd like to give some context for why I disagree.

Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted that "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.

I also think Hanania has excellent takes on most issues, and that's because he's the most intellectually honest blogger I've encountered. I think Hanania likes EA because he's willing to admit that he's imperfect, unlike EA's critics who would rather feel good about themselves than actually help others.

More broadly, I think we could be doing more to attract people who don't hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:

  • In this era of political polarization, it would be a travesty for EA issues to become partisan.
  • All else equal, political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
  • If we want EA to be a global social movement, we need to show that people can be EAs even if they hold beliefs on other issues we find repugnant. I live in Panama for my job. When I arrived here, I had a culture shock from how backwards many people's views are on racism and sexism. If we can't be friends with the person next door who holds bad views, how are we going to make allies globally?

Funnily enough, that verse is often referenced to me by religious Jews when I talk about how many EAs donate >>20%.

MISHNA: Rabbi GWWC said in the name of Rabbi Singer: It is a mitzvah (good deed) to pledge 10%, but one is not required to take upon himself the chumra (stringency) of the Further Pledge.

GEMARA: Rava asks: One who takes the Further Pledge can be compared to the Nazirite, who is called a sinner, for he is depriving himself of what the Holy One, Blessed be He, has provided him. So how can Rabbi GWWC say that one who takes Further Pledge is a righteous man?

Abaye says in the name of Rabbi Singer: The mashal (parable) of the drowning child brings down that one is obligated to give up all of one's possessions to save another's life. For this reason Rabbi GWWC says one who takes the Further Pledge is a righteous man. As Scripture teaches us, "one who saves a life is as though he has saved the world entire".

Rava asks: But why then is 10% sufficient, if it is brought down that one must give up all of one's possessions to save a life?

Abaye says: In the matter of the city of Sodom, the Lord says that "for the sake of 10 righteous men, I would not destroy it". By homiletic interpretation, if one donates even 10%, for his sake the world will be spared.

  1. I agree that clinicians should use lidocaine or digoxin over potassium chloride (KCl) for the reason you gave.
  2. I wrote that the injection is "often of potassium chloride", not always.
  3.  
    1. Given that the fetus is receiving a lethal dose of potassium chloride, I don't think adults tolerating a much smaller medicinal dose should tell us much about how painful a lethal dose would be?
    2. I agree that the fetus isn't being given potassium chloride intravenously, although I didn't know that when I wrote the post (another commenter pointed it out). I'll add a line in the post disclaiming that comparison.

Happy to hear we agree on fetal anesthesia :)

I also very much agree that there's no conflict between this and the pro-choice position, and that increased abortion access would reduce fetal suffering in late-term abortions. (Although increasing abortion access has other, larger ethical problems: from a total utilitarian perspective, there doesn't seem to be much difference between preventing a fetus from living a full life and doing the same for an infant or adult.)

On comparing individual fetuses to individual farm animals, it's worth noting that a 13-week fetus has about half as many neurons as an adult cow. (Cows have 3 billion neurons, while 13-week fetuses have 3 billion brain cells. Since humans have a near 1:1 neuron-glia ratio, a 13-week fetus's neuron count should be about half a cow's.) So on at least one metric, they'd be pretty comparable. Of course, I'm pretty sure this fact is swamped by the other facts about factory farming you gave.
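The back-of-the-envelope version, assuming the ~1:1 neuron-to-glia ratio also applies to the 13-week fetal brain:

$$
3 \times 10^9 \ \text{brain cells} \times \tfrac{1}{2} \approx 1.5 \times 10^9 \ \text{neurons} \approx \tfrac{1}{2} \times \left(3 \times 10^9 \ \text{cow neurons}\right).
$$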

I agree that this probably wouldn't be competitive with animal welfare. However, if we're holding it to the standard for suffering-reducing interventions for humans, it could plausibly be more competitive.

This description of labor induction abortion says:

The skin on your abdomen is numbed with a painkiller, and then a needle is used to inject a medication (digoxin or potassium chloride) through your abdomen into the fluid around the fetus or the fetus to stop the heartbeat.

That sounds like local anesthesia for the mother, which from what I understand is achieved through an injection that numbs the tissue in a specific area rather than through an IV drip. So I don't think this protocol would have any anesthetic effect on the fetus, though I'm not a medical expert and could be wrong.

Based on this, I think the sentence “The fetus is administered a lethal injection with no anesthesia” is accurate.
