All of RobertDaoust's Comments + Replies

Yes, I know, thank you ADS, but what I have in mind is rather something like "Toward an Institute for the Science of Suffering" https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#

Do you know about QRI? They're pretty close to what you're describing. https://www.qualiaresearchinstitute.org/

Your concern about doomsday projects is very welcome in this age of high existential risks. Suffering in particular plays a central role in that game. Religious fanatics, for instance, are waiting for the cessation of suffering through some kind of apocalypse. Many negative utilitarians or antinatalists, on the other hand, would like us to organize the end of the world in the coming years, a prospect that can only lead to absurd results. For the short term, doomsday end-suffering projects can plan to eliminate life (or at least human life, because bacteria a... (read more)

The solution to the problem of suffering cannot be to eliminate all life, because lifeless evolution created life once and could recreate it. Millions of years of pain would then come along again before another intelligent species like ours re-appears with technical power and gets a chance to resolve the problem of suffering by controlling that phenomenon through conscious, rational efforts until the end of this universe.

6
AppliedDivinityStudies
3y
Millions of years of "state of nature"-type pain are strongly preferable to s-risks.
1
dotsam
3y
The doomsday end-suffering project would then be to eliminate life and the conditions for the evolution of life throughout the universe.

"I’m not sure what I should do with this information. There is no cosmic justice in suffering on behalf of others, in living burdened by its unthinkable urgency. Yet there is something like cosmic justice in acting to reduce the worst suffering in our world. I do not even know where to start."

For millennia, many of us have wanted to do something about suffering. Why not work together in an enterprise for the optimal alleviation of suffering in the world? That is what the Algosphere Alliance is proposing: to organize the alleviation of suffering, steadi... (read more)

4
Aaron Bergman
3y
Very much agreed.  https://algosphere.org/ for those interested. 

Congrats on your approach, Sanjay and Meg. The Algosphere Alliance is inviting people who are interested in the organized alleviation of suffering in the world to become a "Partner in the business world", as you can see in https://docs.google.com/document/d/1J9wOcCERPiegaPzfbDLfcjoH45GAgCPp2HeGE1s-5To/edit. That's a beginning.

Yes, Ramiro, you may write to me at daoust514@gmail.com and I will transmit your demand to them. 

2
Ramiro
3y
Thanks, I didn't know Algosphere. Btw, I saw there are two allies in São Paulo. I'd like to get in touch with them, if that's a possibility ;)

I am all in favor of chronic pain being a cause area. In itself, that would be a good thing, but there is another, more important reason: this might help us to realize how exactly physical pain and most concerns in EA are related to the arch-cause of suffering.  The very notion of EFFECTIVENESS is at stake in this matter. In my opinion, the whole field of pain research and management has not decisively advanced since the 1970s, when I first became interested in it. I believe progress is hindered by a fundamental problem in pain theory, as explained in... (read more)

How must we get systematically organized to alleviate suffering in the world?

"Impartiality or cause-neutrality means that in order to be more effective, one should only look at the top level in the hierarchical classification, i.e. consider the whole world (instead of a specific country), all beings (instead of members from a specific species), and all diseases (instead of a specific type of diseases such as cancers)." That is why a theoretical and practical organization based on a global systematic approach is required for optimizing the alleviation of suffering in the world.

Mental health is a prerequisite. Denis Drescher's Dissociation for Altruists offers great tips. If you work on suffering, you cannot deal with it as you do in normal life, because you have to hold the thing steadily in front of you, instead of embracing and dancing with it while you are naked. You have to look at it through a glass that does not let its too-bright fire damage your eyes. I recommend goggles that let you see emotional negativity as a harmless, abstract degree of unpleasantness/unwantedness: your cold reason will pretty quickly get used to t... (read more)

1
Andy_Schultz
3y
Any good face shields you can recommend? :-) Perhaps a shield of good relationships and overall good mental health?

"The argument against" is that a Thanos-ing all humanity would not save the lives of other sentient beings, it would just allow those lives to continue being, much too often, miserable: human animals are currently the only chance for all animals to escape the grips of excessive suffering. The problem here, "somethoughts", is that you, like countless of us, value life so much more than the alleviation of suffering that you pose horribly absurd problems, and with such an unexamined value in the background lurks a nihilism that represents, to be frank, an existential risk. 

Thank you for this work, Marius, it fits well into a systematic approach that should be developed, as suggested in Preparatory Notes for the Measurement of Suffering.

Hi Derek, just in case something in there might be useful to you: https://docs.google.com/document/d/1OTCQlWE-GkY_V4V-OfJAr7Q-vJyIR8ZATpeMrLkmlAo/edit

1
Derek
3y
Interesting, thanks!

It is very high-impact when survival is considered indispensable for keeping control over nature and preventing negative values from coming back after extinction.

1
Prabhat Soni
3y
Insightful thoughts!
2
Jonathan_Michel
3y
I would recommend this short essay on the topic: Human Extinction, Asymmetry, and Option Value. Abstract: "How should we evaluate events that could cause the extinction of the human species? I argue that even if we believe in a moral view according to which human extinction would be a good thing, we still have strong reason to prevent near-term human extinction." (Just to clarify: this essay was not written by me.)

There is this reference: https://ieeexplore.ieee.org/document/9001063/authors#authors, where the first paragraph reads:

"In the last year, the Association for Computing Machinery (ACM) released new ethical standards for professional conduct [1] and the IEEE released guidelines for the ethical design of autonomous and intelligent systems [2] demonstrating a shift among professional technology organizations toward prioritizing ethical impact. In parallel, thousands of technology professionals and social scientists have formed multidisciplinary committees... (read more)

We may sympathize in the face of such difficulties. Terminology is a big problem when speaking about suffering in the absence of a systematic discipline dealing with suffering itself. That's another reason why the philosophy of well-being is fraught with traps and why I suggest the alleviation of suffering as the most effective first goal. 

Okay, I realize that the relevance of neuroscience to the philosophy of well-being can hardly be made explicit in sufficient detail at the level of an introduction. That is unfortunate, if only for our mutual understanding, because with enough attention to detail the toe-stubbing example that I used would not be understood as you understand it: if it is not unpleasant to stub your toe, how can it be bad, pro tanto or otherwise?

2
MichaelPlant
3y
I think we may well be speaking past each other somewhat. In my example, I took it the toe stubbing was unpleasant, and I don't see any problem in saying the toe stubbing is unpleasant but I am simultaneously experiencing other things such that I feel pleasure overall. The usual case people discuss here is "how can BDSM be pleasant if it involves pain?" and the answer is to distinguish between bodily pain in certain areas vs a cognitive feeling of pleasure overall resulting from feeling bodily pain.

Inferential distance makes discussion hard indeed. Let's first try to go to this focal point: which ultimate goal is best for effective altruists. The answer cannot be found by reasoning alone; it requires a collective decision based on shared values. Some prefer the goal of having a framework for thinking and acting with effectiveness in altruistic endeavors. You and I would not be satisfied with that, because altruism has no clear content: your altruistic endeavor may go against mine (examples may be provided on demand). Some, then, realizing the necess... (read more)

Excellent response! I'll think about it and come back to let you know my thoughts, if you will. 

1
HStencil
3y
Thank you — please do!

Hmm... 1) When an individual's life is evaluated as good or bad, there may be an ultimate reason that is invoked to explain it, but I would not say that an ultimate reason has intrinsic value: it is just valued as more fundamental than others, in the current thinking scheme of the evaluating entity. 2) Do we have an overriding moral reason to alleviate suffering? In certain circumstances, yes: if there is an eternal hell, we ought to end it if we can. But in general, no, I don't think morality is paramount: it surely counts bu... (read more)

1
HStencil
3y
I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.

Similarly, I disagree with your contention that morality isn't, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.”

Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”

Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further

Is there anyone who believes 1) and 2)?

1
HStencil
3y
I’m not sure, but it seemed to me that this was the view that you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?

Thanks, Michael, for your reaction. Clearly, "qualia depend on each other for having any value/meaning" is too short a sentence to be readily understood. I meant that if consciousness or sentience is made up of qualia, i.e. meaningful and (dis)valuable elementary contents of experience, then each of those qualia has no value/meaning except inasmuch as it relates to other qualia: nothing is (dis)valuable by itself, qualia depend on each other... In other words, one "quale" has a conscious value or meaning only when it is within a psychoneural circuit that ... (read more)

2
MichaelPlant
3y
Sorry, I really don't follow your point in the first para. One thing to say is that experiences of suffering are pro tanto bad (bad 'as far as it goes'). So stubbing your toe is bad, but this may be accompanied by another sensation such that overall you feel good. But the toe stubbing is still pro tanto bad. Anyway, like I said, none of this is directly relevant to the post itself!

Quick thoughts. The goal of effective altruism ought to be based on something more precise than the good of others defined as "well-being", because nothing is intrinsically or non-instrumentally good for a sentient entity when qualia depend on each other for having any value/meaning. As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don't want than on what we want, and the latter can be much more easily subordinated to the former than the other way around.

8
MichaelPlant
3y
I'm not quite sure I understand what you mean. My experiences have no value unless there is another experiencer in the world? If I'm the last person on Earth and I stub my toe, I think that's bad because it's bad for me, that is, it reduces my well-being. Also, given your concerns, you'll need to define suffering in a way that is distinct from well-being. If I think suffering is just negative well-being - aka 'ill-being' - then your concerns about well-being apply to suffering too. Also also, if suffering isn't intrinsically bad, in what sense is it bad? Finally, I note that all of these concerns are about the value of well-being in a moral theory, which is a distinct question from what this post tackles, which is just what the theories of well-being are. One could (implausibly) say well-being had no moral value (which is, I suppose, almost what impersonal views of value do say...).
2
HStencil
3y
It’s not clear to me how one can believe 1) that there is nothing that ultimately explains what makes a person’s life go well for them, and 2) that we have an overriding moral reason to alleviate suffering. It would seem dangerously close to believing that we have an overriding moral reason to alleviate suffering in spite of the fact that it is not Bad for those who experience it. You might claim that suffering is instrumentally bad, that it makes it harder to achieve... whatever one wants to achieve, but presumably, if achieving whatever one wants to achieve is valuable, it is valuable because of the way in which it leads one’s life to “go well.” If that is the case, then you have a theory of well-being. If, on the other hand, achieving whatever one wants to achieve is not valuable in any absolute sense, then it is hard to say why it would be valuable at all, and you, again, would struggle to justify why suffering is a bad.

I like your thesis, Pedro, because when I look at its chapters on "Why the Future Matters" and "Optimal Control Theory", I think that useful links could be established between it and the long-term project of the Algosphere Alliance about organizing the alleviation of suffering in the world. As I wrote recently: in my view, the causes of severe suffering are so many and so diverse that the current pandemic is still only a small part of the issue when it comes to organizing global efforts to alleviate suffering. It is necessary to deal wit... (read more)