All of Rhyss's Comments + Replies

You might enjoy "On the Survival of Humanity" (2017) by Johann Frick. Frick makes the same point there that you quote Torres as making—that total utilitarians care about the total number and quality of experiences but are indifferent to whether these experiences are simultaneous or extended across time. Torres has favorably cited Frick elsewhere, so I wouldn't be surprised if they were inspired by this article. You can download it here: https://oar.princeton.edu/bitstream/88435/pr1rn3068s/1/OnTheSurvivalOfHumanity.pdf

I don't know if StrongMinds explicitly has a goal of reducing suicides, or what its predicted effect on suicide risk might be, but searching for "suicide" on the StrongMinds site (https://strongminds.org/?s=suicide) brings up a lot of results. Whether or not suicide prevention is part of their mission, treating depression would seem likely to reduce the risk of suicide for some people. If so, some of the value of StrongMinds might come from the extension of lives. This would mean the value of StrongMinds could vary depending on which view of the harm of death we take.

MichaelPlant · 1y
Hello Rhyss. We actually hadn't considered incorporating a suicide-reducing effect of talk therapy into our model. I think suicide rates in, e.g., Uganda, one place where SM works, are pretty low; I gather they are pretty low in low-income countries in general. Quick calculation: I came across these Danish numbers, which found that "After 10 years, the suicide rate for those who had therapy was 229 per 100,000 compared to 314 per 100,000 in the group that did not get the treatment." Very, very naively, then, that's one life saved via averted suicide per 1,000 treated, or about $150k to save a life via therapy (vs $3-5k for AMF), so it probably wouldn't make much difference. But that is just looking at suicide. We could also look at the all-cause mortality effects of treating depression (mental and physical health are often comorbid, etc.).
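For concreteness, here is a minimal sketch of that back-of-the-envelope arithmetic. The suicide rates are the Danish 10-year figures quoted above; the ~$150 cost per person treated is a hypothetical input added here (it is not stated in the comment), chosen so the result lands near the quoted ~$150k per life saved.

```python
# Back-of-the-envelope cost per life saved via averted suicide from therapy.
# Suicide rates are the 10-year Danish figures quoted above; the per-person
# treatment cost is an assumed illustrative input, not a figure from the comment.

rate_without_therapy = 314 / 100_000   # 10-year suicide rate, no therapy
rate_with_therapy = 229 / 100_000      # 10-year suicide rate, with therapy
cost_per_person_treated = 150          # USD per person treated (assumption)

lives_saved_per_person_treated = rate_without_therapy - rate_with_therapy   # ~0.00085
people_treated_per_life_saved = 1 / lives_saved_per_person_treated          # ~1,200 (≈ 1,000, very naively)
cost_per_life_saved = people_treated_per_life_saved * cost_per_person_treated

print(f"People treated per life saved: ~{people_treated_per_life_saved:,.0f}")
print(f"Cost per life saved: ~${cost_per_life_saved:,.0f}")   # same ballpark as the quoted ~$150k
```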

I'm currently taking a class with Jeff McMahan in which he discusses prenatal injury, and I'm pretty sure he would agree with how you put it here, Richard. This doesn't affect your point, but he now likes to discuss a complication to this: what he calls "the divergent lives problem." The idea is that an early injury can lead to a very different life path, and that once you're far enough down this path—and have the particular interests that you do, and the particular individuals in your life who are important to you—Jeff thinks it can be irrational to regre... (read more)

The article seems to contradict itself in the end. In the beginning of the article, I thought you were saying you're not an EA because you're not a utilitarian (because utilitarianism is poison), and to be an EA is just to be a utilitarian in some form—and that even if EAs are utilitarians in a very diluted form, the philosophy they are diluting is still a poison, no matter how diluted, and so is unacceptable. So, I was expecting you to offer some alternative framework or way of thinking to build an altruistic movement on, like moral particularism or contr... (read more)

This post leaves some dots unconnected. 

Are you suggesting that people pretend to have beliefs they don't have in order to have a good career and also shift the Republican party from the inside? 

Are you suggesting that anyone can be a Republican as long as they have a couple of beliefs or values that are not totally at odds with those of the Republican party — even if the majority of their beliefs and values are far more aligned with another party? 

Or by telling people to join the Republican party, are you suggesting they actively change som... (read more)

I don't think most EAs have an obligation to involve themselves in politics at all, and I don't think every young EA should join the GOP, but I do think:

"Young effective altruists in the United States interested in using public policy to make the world better should almost all be Republicans."

The people I would most like to think about this post are:

  1. EAs who are conservative/centrist. Since I think there are too few EAs within the Republican Party, I think they should keep in mind that they can probably do more good than a similar EA who is contemplat
... (read more)

The original argument you're reacting to is flawed, and that flaw carries over into your second one. To make both arguments clearer, we need to know the significance of an embryo being human, why this matters to utilitarians, and what sort of utilitarians you mean. Does an embryo being human mean it has the same moral status as an adult human? Does it mean it has a similar interest in continued living as an adult human does? Does it mean it is harmed by death -- and if so, does this harm of death leave it worse off than if it were never conceived at all?

And what ... (read more)

SaraAzubuike · 2y
Hey, thanks for answering my post. Means a lot, especially since you seem to be more familiar with philosophy than me. "Total utilitarians care about intrinsic value of outcomes." - But a) death is painful, b) death is the loss of future life, and c) parents grieve over miscarriages just as people grieve over the loss of a friend. "Embryos must have an interest in continued existence." - Hm, but I argue this is a temporary state. Say I give that mother nutrition and I wait 9 months. That embryo now has an interest in continued existence. In a similar vein, suicidal people have no interest in continued existence. But if I give that suicidal person therapy and wait some time, that person now has an interest in continued existence.
Answer by Rhyss · Nov 24, 2020

Balliol tends to have a lot of philosophy graduate students, and Wadham is considered to be one of the most left-wing colleges. Looking at the list of current Oxford philosophy graduate students, I noticed there are a lot at St Anne's right now as well. But this can change depending on the year, and philosophy student obviously doesn't mean EA. I would be surprised if any college reliably had a higher number of EAs. 

AlasdairGives' suggestion to consider funding options makes sense, though you should also keep in mind that the wealthiest colleges get the most applications, so if you apply to St John's, there's a greater risk they won't pick you, which adds randomness to the college you end up at.

I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.

A lot of good ideas here!

I'm interested in how Demeny voting is expected to work psychologically. I would expect just about everyone who is given a second vote (which they are told to submit on behalf of future generations) simply to cast it for whatever their first vote was for. I imagine they would either think their first vote was for the best policy/person, in which case they could convince themselves that's best for future generations too, or they would realize their first vote is only good for the short term, but they would... (read more)

tylermjohn · 4y
Surprising (and confusing!) as it may be, there is some evidence that voters would vote differently with their Demeny vote than with their first vote. I've asked Ben Grodeck (who clued me into Demeny voting) to weigh in with more data, but for now see this study from Japanese economist Reiko Aoki, which found (Table 8 and Figure 7) that surveyed participants who are permitted to cast one vote on behalf of themselves and one vote for their child sometimes vote differently with their second vote. The effect isn't drastic, but it is certainly non-trivial. http://hermes-ir.lib.hit-u.ac.jp/rs/bitstream/10086/22250/1/cis_dp539.pdf The study authors further find that policy preferences on behalf of oneself and on behalf of one's children diverge to a greater degree, and they hypothesize that we would see more divergence between the multiple votes of Demeny voters if they had political options that better reflected the divergence between these sets of preferences. Thus, they think that instituting Demeny voting would cause party platforms to change to try to cater to the policy preferences of parents voting on behalf of their children.

I haven't read the Srinivasan, Gray, and Nussbaum critiques. However, I did read the Krishna critique, and that one uses another rhetorical technique (aside from the sneering dismissal McMahan mentions) worth watching out for in critiques of effective altruism. The technique is for the critic of EA to write in as beautiful, literary, and nuanced a way as possible, in part to subtly frame the critic as a much more fully developed, artistic, and mature human than the (implied) shallow utilitarian robots who devote their lives to doing a lot of good.

Effective altrui... (read more)

Austen_Forrester · 8y
Absolutely. That is such a common tactic. I think all of the criticisms against EA use one cheap rhetorical trick or another. Someone needs to make up a definitive web page that lists all the criticisms of EA with responses, and most importantly, calls out the rhetorical device that was used. It's mostly the same tired, discredited criticisms and persuasive tricks that are used over and over, so rather than responding to each individually, we can simply refer people to the web page.

Sure, you may have saved hundreds of lives, but your essays feature too few obscure literary references, you monstrous, pathetic excuse for a human being.

I forgot to mention that your post did help to clarify points and alleviate some of my confusion. Particularly the idea that an ultra-powerful AI tool (which may or may not be sentient) "would still permit one human to wield power over all others."

The hypothetical of an AI wiping out all of humanity because it figures out (or thinks it figures out) that it will increase overall utility by doing so is just one extreme possibility. There must be a lot of credible-seeming scenarios opposed to this one in which an AI could be used to increase overal... (read more)

Evan_Gaensbauer · 9y
Brian Tomasik is a self-described "negative-leaning" hedonic utilitarian who is a prominent thinker for effective altruism. He's written about how humanity might have values which lead us to generate much suffering in the future, but he also worries a machine superintelligence might end up doing the same. There are myriad reasons he thinks this that I can't do justice to here. I believe right now he thinks the best course of action is to try steering the values of present-day humanity, or at least of a crucially influential subset, towards neglecting suffering less. He also believes in doing foundational research to better ascertain the chances of a singleton promulgating suffering throughout space in the future. To this end he both does research with and funds colleagues at the Foundational Research Institute. His whole body of work concerning future suffering is referred to as "astronomical suffering" considerations, a sort of complementary utilitarian consideration to Dr Bostrom's astronomical waste argument. You can read more of Mr. Tomasik's work on the far future and related topics here. Note that some of it is advanced and may require background reading to understand all the premises in some of his essays, but he usually provides citations for all of this.

I haven't explored the debate over AI risk in the EA movement in depth, so I'm not informed enough to take a strong position. But Kosta's comment gets at one of the things that has puzzled me -- as basically an interested outsider -- about the concern for x-risk in EA. A very strong fear of human extinction seems to treat humanity as innately important. But in a hedonic utilitarian framework, humanity is only contingently important to the extent that the continuation of humanity improves overall utility. If an AI or AIs could improve overall utility by des... (read more)

Owen Cotton-Barratt · 9y
If you're a hedonic utilitarian, you might retain some uncertainty over this, and think it's best to at least hold off on destroying humanity for a while out of deference to other moral theories, and because of the option value. Even if someone took the view you describe, though, it's not clear that it would be a helpful one to communicate, because talking about "AI destroying humanity" does a good job of communicating concern about the scenarios you're worried about (where AI destroys humanity without this being a good outcome) to other people. As the exceptions are things people generally won't even think of, caveating might well cause more confusion than clarity.

RyanCarey · 9y
Cautious support of giving an AI control is not opposed to x-risk reduction. An existential risk is, by definition, something that would curtail the potential of Earth-originating life. Turning civilisation over to AIs or ems might be inevitable, but it would still be safety-critical. A non-careful transition to AI is bad for utilitarians and many others because of its irreversibility: once you codify values (a definition of happiness and whatever else) in an AI, they're stuck, unless you've programmed into the AI a way for it to reflect on its values. When combined with Bostrom's argument in Astronomical Waste, that the eventual awesomeness of a technologically mature civilisation is more important than when it is achieved, this gives a strong reason for caution.