All of Alexander Herwix 🔸's Comments + Replies

I don't think it's unreasonable to discuss the appropriateness of particular timelines per se, but the fact remains that this is not the purpose or goal of the book. As I acknowledged, short-to-medium-term timelines are helpful for motivating the relevance or importance of the issue. However, I think timelines in the 5-to-50-year range are a very common position now, which means that the book can reasonably use this as a starting point for engaging with its core interest, the conditional "what if".

Given this as a backdrop, I think it's fair to say that ...

I didn’t comment on the accuracy of individual timelines but emphasized that the main topic of the book is the conditional "what if". It doesn’t really make sense to critique the book at length for something it only touches upon tangentially to motivate the relevance of its main topic. And they are not making outrageous claims here if you look at the ongoing discourse and the ramping up of investments.

It’s possible to take Yudkowsky seriously even if you are less certain on timelines and outcomes. 

It could be an interesting exercise for you to reflect on the origins of your emotional reactions to Yudkowsky's views.

5
Yarrow Bouchard 🔸
I think it’s fair to criticize Yudkowsky and Soares’ belief that there is a very high probability of AGI being created within ~5-20 years because that is a central part of their argument. The purpose of the book is to argue for an aggressive global moratorium on AI R&D. For such a moratorium to make sense, probabilities need to be high and timelines need to be short. If Yudkowsky and Soares believed there was an extremely low chance of AGI being developed within the next few decades, they wouldn’t be arguing for the moratorium.

So, I think Oscar is right to notice and critique this part of their argument. I don’t think it’s fair to say Oscar is critiquing a straw man.

You can respond with a logical, sensible appeal to the precautionary principle: shouldn’t we prepare anyway, just in case? First, I would say that even if this is the correct response, it doesn’t make Oscar’s critique wrong or not worth making. Second, I think arguments around whether AGI will be safe or unsafe, easy or hard to align, and what to do to prepare for it depend on specific assumptions about how AGI will be built. So, this is not actually a separate question from the topic Oscar raised in this post.

It would be nice if there were something we could do just in case, to make any potential future AGI system safer or easier to align, but I don’t see how we can do this in advance of knowing what technology or science will be used to build AGI. So, the precautionary principle response doesn’t add up, either, in my view.

You are not addressing the key point of my comment, which concerns the nature of their argument and your strawmanning of their position. Why should I take your posts seriously if you feel the need to resort to these kinds of tactics?

I am just trying to provide you with some perspective on why people might feel the need to downvote you. If you want people like me to engage (although I didn’t downvote, I don’t really have an interest in reading your blog), I would recommend meeting us where we are: concerned about current developments potentially ...

I didn’t downvote, but it seems like you are attacking a straw man here… the book is explicitly focused on the conditional IF anyone builds it. They never claim to know how to build it but simply suggest that it is not unlikely to be built in the future. I don’t know which world you are living in, but this starting assumption seems pretty plausible to me (and to quite a few other people more knowledgeable than me on those topics, such as Nobel Prize and Turing Award winners…). If not in 5, then maybe in 50 years.

I would say at this point the burden is on you to make the case that the overall topic is nothing to worry about. Why not write your own book or posts where you let your arguments speak for themselves? 

4
Yarrow Bouchard 🔸
Eliezer Yudkowsky forecasts a 99.5% chance of human extinction from AGI "well before 2050", unless we implement his proposed aggressive global moratorium on AI R&D. Yudkowsky deliberately avoids giving more than a vague forecast on AGI, but he often strongly hints at a timeline. For example, in December 2022, he tweeted:

In April 2022, when Metaculus’ forecast for AGI was in the 2040s and 2050s, Yudkowsky harshly criticized Metaculus for having too long a timeline and not updating it downwards fast enough. In his July 2023 TED Talk, Yudkowsky said:

In March 2023, during an interview with Lex Fridman, Fridman asked Yudkowsky what advice he had for young people. Yudkowsky said:

In that segment, he also said, "we are not in the shape to frantically at the last minute do decades’ worth of work."

After reading these examples, do you still think Yudkowsky only believes that AGI is "not unlikely to be built in the future", "if not in 5 then maybe in 50 years"?
1
Oscar Davies
I've presented some of my arguments in articles on my Substack, as well as in a philosophy of mind book I wrote addressing topics like "what is reasoning/thinking?" that sadly I haven't been able to get published yet. On my Substack I also have articles on Hinton and others.

So, you do it on purpose, not out of inability? Thanks for clarifying.

I love this question and I am looking forward to seeing what hedonic utilitarians come up with here. This has similar vibes to computronium thought experiments but better. Thanks for pointing this question out to me :)

5
ThomNorman
Now this is uncharitable

Thanks for sharing this! It's an entertaining read and a valuable reminder of the limits of our perspectives. I love how the cleaner shows up at the end. True koan vibes!

I don't have time to read the full post and series, but the logic of your argument reminds me very much of Werner Ulrich's work. It may be interesting for you to check him out. I will list suggested references in order of estimated cost/benefit. The first paper is pretty short but already makes some of your key arguments and offers a proposal for how to deal with what you call "unawareness".

Ulrich, W. (1994). Can We Secure Future-Responsive Management Through Systems Thinking and Design? Interfaces, 24(4), 26–37. https://doi.org/10.1287/inte.24.4.26

Ulric...

I think it would be helpful to not use longtermism in this synonymous way because I think it’s prone to lead to misunderstandings and unproductive conflict. 

For example, there is a school of thought called the person-affecting view, which denies that future, non-existing people have moral patienthood but would still be able to have reasonable discussions about intergenerational justice in the sense that children might want to have children, etc.

In general, I wouldn’t characterize those views as any more or less extreme or flat-footed than weak fo...

It seems to me that you use “intergenerational justice” and longtermism in somewhat synonymous fashion. I think I would disagree with this sentiment. Longtermism is a specific set of positions whereas I would see intergenerational justice as a more open concept that can be defined and discussed from different positions.

I also think that there are reasonable critiques of longtermism. In the spirit of your post, I hope you stay open to considering those views.

3
Danny Wardle
That's fair. I tend to think of intergenerational justice as synonymous with a weak form of longtermism, although perhaps 'longtermism' is too loaded a term (I'm thinking of a bare bones version of "Future generations are worthy of moral consideration").  I also agree there are reasonable critiques of (stronger forms of) longtermism. I wouldn't call myself a strong longtermist personally, so I suppose I should've been clearer that I'm not making a positive argument for any particular form of longtermism. Rather, my goal is to respond to what I see as some fairly extreme and flat-footed responses to weak longtermism.

I have only read the summarybot comment but based on that I wanted to leave a literature suggestion that could be interesting to people who liked this post and want to think more about how to put a pragmatic approach to ethics into practice.

Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics (Vol. 1). Peter Lang.

Abstract: Major contemporary conceptions of ethics such as discourse ethics and neocontractarian ethics are not grounded in a sufficiently p...

I am wondering if you could say something about how the political developments in the US (i.e., Trump 2.0) are affecting your thinking on AGI race dynamics? It seems like the default assumption communicated publicly is still that the US is "the good guys" and a "western liberal democracy" that can be counted on, when its actual actions on the world stage cast at least some doubt on this position. In some sense, one could even argue that we are already playing out a high-stakes alignment crisis at this very moment.

Any reactions or comments on this issue? I understand that openness around this topic is difficult at the moment, but I don't think that complete silence is all that wise either.

I don’t agree with this sentiment. At least for me, I really do not see any real cost associated with being vegan that would keep me from earning more or being a better person in any meaningful way.

For example, I am pretty sure I wouldn’t work more if I ate more meat, why would I? There really doesn’t seem to be a causal pathway here. Maybe if you really crave beef and can’t help thinking about it all the time… yeah, that could be distracting and reduce your performance, but I am not sure that something like this occurs all that often. Never h...

Maybe I am naive, but what is the cost that’s associated with not eating meat? Not having the taste of it? What motivates you to donate money to reduce animal suffering if you believe that your taste is more valuable than the life of the animal in the first place? Or are you at a point where you believe that animals matter enough to warrant some small amounts of donations but not enough to deprive you of their taste?

I mean, of course it’s good to donate, but I don’t see why this means that you should continue the practice that you want to offset if you can help it. Or am I missing something?

Similarly, if I offset pollution, I do not turn around and pollute more because that would defeat the purpose?!

3
sammyboiz🔸
I am not maximally EA and I assume you aren't either (in the sense that we aren't spending every living second trying to generate impact). We both have some level of commitment towards altruism that we are willing to put in. I believe that spending effort to be vegan has a cost; if I spent that time making more money, I could do more for the world. Therefore, yes, my taste is more important than the life of an animal.

I can ask you a similar question: do you believe spending time on your hobby yesterday is more valuable than the life of the animals that you could have saved?

As for pollution, I can say that my commute is more important than the impact of my emissions because I can outweigh the suffering caused through donations.

Reading your comments, I think we come from different perspectives when reading such a post. 

I read the post as an attempt to highlight a blind spot in "orthodox" EA thinking, which simply tries to make a case for the need to revisit some deeply ingrained assumptions based on alternative viewpoints. This tends to make me curious about the alternative viewpoints offered, and if I find them at least somewhat plausible and compelling, I try to see what I can do with them based on their own assumptions. I do not necessarily see it as the job of the post to ...

I think the post was already acknowledging the difference in perspective and trying to make the case that the perspective that you are advocating for seems shortsighted from their perspective.

The key point here seems to be the consideration that is given to interconnectedness. Whereas “traditional” EA assumes stability in the Earth system and focuses “only” on marginal improvements, ceteris paribus, the ecological perspective highlights the interconnectedness of “everything” and the need for a systemic focus on sustaining the entire Earth system rather than...

5
Gemma 🔸
The post emphasizes systemic thinking but doesn't clarify how this would change cause prioritization in practice. The example takes as a given that we should make value judgments favoring ecosystems over human/animal welfare.

I've seen various posts from people working on existential risk who try to put estimates on the likelihood of systemic failures. While measurement in complex systems is challenging, I'd like to see more concrete proposals from systemic thinkers. What specific interventions do they suggest? How would they evaluate impact, even roughly?

I think this post falls into the classic "EA should" trap: it criticizes current approaches but doesn't actually suggest concrete solutions or alternatives. I'm saying this because I'd genuinely be interested in seeing more concrete analysis from this perspective but don't think this post is productive.

I have never said that how we treat nonhuman animals is “solely” due to differences in power. The point that I have made is that AIs are not humans and I have tried to illustrate that differences between species tend to matter in culture and social systems.

But we don’t even have to go to species differences, ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don’t we all engage in mutually beneficial trade and cooperate to live happily ever after?

Because while we have mostly con...

But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties are strongly divergent, as in human-vs-AI scenarios?

I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what should stop them from getting it?

I don’t see a way around value alignment in the strict sense (ironically this could also involve AIs aligning our values to theirs similar to how we have aligned dogs).

The difference is that a superintelligence, or even an AGI, is not human, and they will likely need very different environments from us to truly thrive. Ask factory-farmed animals, or basically any other kind of nonhuman animal, whether our world is in a state of violence or war… As soon as strong power differentials and diverging needs show up, the value co-creation narrative starts to lose its magic. It works great for humans, but it doesn’t really work with other species that are not very close to and aligned with us. Dogs and cats have arguably fared quite well, but o...

5
Matthew_Barnett
Animals are not socially integrated in society, and we do not share a common legal system or culture with them. We did not inherit legal traditions from them. Nor can we agree to mutual contracts, or coordinate with them in a meaningful way. These differences seem sufficient to explain why we treat them very differently, as you described. If this difference in treatment were solely due to differences in power, you'd need to explain why vulnerable humans are not regularly expropriated, such as old retired folks, or small nations.

This reminds me of the work on the Planungszelle (planning cell) in Germany, but with some more bells and whistles. One difference that I see is that, afaik, the core idea in more traditional deliberation processes is that the process itself is also understandable by the average citizen. This gives it some grounding and legitimacy, in that all people involved in the process can cross-check each other and make sure that the outcome is not manipulated. You seem to be diverging from this ideal a little bit, in the sense that you seem to require the use of sophisticated statistica...

6
Odyssean Institute
Thank you for a thoughtful response! Indeed, we have considered these risks, and although for the sake of brevity we haven't delved heavily into the range of experimental designs for an assembly in the White Paper directly, we have in conversations with strategic partners such as Missions Publiques. We agreed that a model similar to theirs on certain assemblies would be wise. This involves the public deliberating in isolation first, so they aren't overly primed by the horizon scan, before then being introduced to the findings of the panel afterwards. This allows for iterations in the process without overly influencing initial values and considerations from the public. So, for example, the public would be consulted, help to sculpt the optimalities scan in DMDU, and then incorporate the EEJ panel's findings to refine and deepen engagement. Ultimately the assembly decides, so we are aware of the need to balance these steps to ensure they support rather than subvert this aspect.

DMDU has a considerable emphasis on translating findings effectively and avoiding getting bamboozled by models (such as the emphasis Erica Thompson puts on caution around this in 'Escape from Model Land'). It is a positive sign that DMDU practitioners are well aware of the 'fallacy of misplaced concreteness' and the risks this poses, and a large part of their methodology is devised to keep this explicit. The education phase of an assembly would also involve familiarising participants carefully with the value and limits of the models used, with ranges of uncertainties. It also bears noting that while not all questions will require modelling, done carefully and translated with caution, certain civilisational risks will need this level of rigour.

The key point that I am trying to make is that you seem to argue against our common-sense understanding that animals are sentient because they are anatomically similar to us in many respects and also demonstrate behavior that we would expect sentient creatures to have. Rather, you come up with your own elaborate requirements that you argue are necessary for a being to be able to say something about qualia in other beings, but then at some point (maybe at the point where you feel comfortable with your conclusions) you stop following your own line of argument throug...

But how can you assume that humans in general have qualia if all the talking about qualia tells you only that qualia exist somewhere in the causal structure? Maybe all talking about qualia derives from a single source? How would you know? For me, this seems to be a kind of reductio ad absurdum of your entire line of argument.

-5
MikhailSamin

Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worthwhile exploring if you are so inclined. 

One interesting line of work that you do not seem to be considering at the moment is the work done in the "metacrisis" (or polycrisis) space. See this presentation for an overview, but I recommend diving deeper than this to get a better sense of the space. What this perspective is interested in is trying to understand and address the underlying patterns which create the wicked situation we...

Hey Daniel,

as I also stated in another reply to Nick, I didn’t really mean to diminish the point you raised but to highlight that this is really more of a "meta point" that’s only tangential to the substance of the issue outlined. My critical reaction was not meant to be against you or the point you raised, but against the more general community practice / trend of focusing on those points at the expense of engaging with the subject matter itself, in particular when the topic goes against mainstream thinking. This, I think, is somewhat demonstrated by the fact that your comme...

Hey Nick,

thanks for your reply. I didn’t mean to say that Daniel didn’t have a point. It’s a reasonable argument to make. I just wanted to highlight that this shouldn’t be the only angle from which to look at such posts. If you look, his comment is by far the most upvoted, and it only addresses a point tangential to the problem at hand. Of course, getting upvoted is not his "fault". I just felt compelled to highlight that overly focusing on this kind of angle only brings us so far.

Hope that makes it clearer :)

Your question reminded me of the following quote:

It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It

Maybe here we are talking about an alternative version of this:

It Is Difficult to Get a Man to Say Something When His Salary (or Relevance, Power, Influence, Status) Depends Upon Him Not Saying It

Isn’t your point a little bit pedantic here, in the sense that you seem to be perfectly able to understand the key point the post was trying to make, find that point somewhat objectionable or controversial, and thus point to some issues regarding "framing" rather than really engage deeply with the key points?

Of course, every post could be better written, more thoughtful, etc., but let’s be honest: we are here to make progress on important issues and not to win "argument style points." In particular, I find it disturbing that this technique of criticizing sty...

My problem with the post wasn't that it used subpar prose or "could be written better", it's that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn't about "argument style points", it's about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.

The reason I didn't personally engage with the object level is I didn't feel like I had anything particularly valuable to say on the topic. I didn't avoid saying my object-level views (if he had written a similar post with a style I didn't take issue with, I wouldn't have responded at all), and I don't want other people in the community to avoid engaging with the ideas either.

6
NickLaing
Thanks Alex. In general I agree with you: when viewpoints outside of what most EAs think are expressed (like, as you say, ideas about handing more power to governments and post-growth concepts), they do sometimes get strawmanned and voted down without good reason. In this case, though, I think the original poster was fairly aggressive with rhetorical tricks as a pretty large part of making their argument, so I think Daniel's criticism was reasonable.

Thank you for writing this post!

I think it is really important to stay flexible in the mind and to not tie ourselves into race dynamics prematurely. I hope that reasonable voices such as yours can broaden the discourse and maybe even open up doors that were only closed in our minds but never truly locked.

Ok, I acknowledge that I might have misunderstood your intent. If I had taken your point to be a dispassionate explanation of why people (the EA community) don't engage with this topic, I myself might have reacted more dispassionately. However, as I read your comments, I don't think it was very clear that this is what you were after. Rather, it seemed like you were actively making the case against engaging with the topic and using strawmanning tactics to make your point. I would encourage you to be clearer in this regard in the future, and I will try to b...

2
Davidmanheim
On your first point, I was first clarifying that there has been discussion of this, and there was a pretty clear reason to dismiss this in general - while in my very first post agreeing that "There are other claims that degrowth makes that seem unobjectionable, and worthy of debate." You attacked that, and my position, and I defended it. I don't think I used a strawman at any point - I think that I responded to your general claim about "degrowth" with an accurate characterization of that position, and you retreated to a series of specific analyses that defend specific points.

On your second point, you're incorrectly interpreting what was done in 1972, which I'm very, very familiar with - I've actually read the report and used the model as a teaching tool. It was absolutely intended to predict consequences of decisions, to support specific decisionmakers, and they explicitly said that while it was imperfect, it was intended to be used as-is in order to make decisions. I can only urge you to read their original work. The patterns it explored didn't hold up, the models were wrong in how they projected the key inputs and factors, and the conclusions they came to were wrong. Recent claims that they got things right are revisionist and wrong - post-hoc justification is possible anywhere, but as I've said for years, it's unsupportable here.

And finally, in general, if you ask others to be more charitable to a position instead of defending it, you're asking for a favor rather than saying that something stands on its own merits. I did not say there was nothing here worthy of consideration, but I did say that their central claim was wrong. I agree that it's wonderful to be charitable in discussions, but as a general point, no, I don't think it makes sense to try to be charitable to and steelman every opposing viewpoint every time it is brought up, especially after you've looked into it.

I would argue that it is a snarky but honest reflection of my state of mind. I also support my claim with evidence if you continue reading the comment. I am walking a fine line, but I think my comment should still pass as constructive and well-intentioned, all things considered. If you beg to differ, feel free to make your case.

1
Miquel Banchs-Piqué (prev. mikbp)
A recent project looking into those sorts of things in the context of Europe is the MEDEAS project. For the French speakers around here, Jean-Marc Jancovici has a lot of material in French in his lessons at the École des Mines; I have only seen a couple of his talks, which happened to be in English. Hagens has an online course, Reality 101, which I found really good. I find his podcast too "sentimental".
2
Davidmanheim
I'm explaining why people haven't engaged with this - the specifics are missing, or have little to do with degrowth, or are wrong. You can cite the study "justifying" limits to growth (which I've discussed on this forum before!), but they said that there would be a collapse decades ago, so it's hard to take that seriously. I'm sure there is a steelmanned version of this that deserves some credit, and I initially said that there are some ideas from that movement that deserve credit - but I don't understand what it has to do with the degrowth movement, which is pretty explicit about what it wants and aims for.
1
harfe
I think this is insufficiently kind.

I have to disclaim that I am NOT an expert on degrowth, but from everything I know about the topic, you are building up a huge strawman and misrepresenting their position in a way that really proves the point I was trying to make.

Just searching on Google Scholar for the term "degrowth" and looking at the first result, I come to an open-access article, "Research on Degrowth", in a reputable outlet, with a review discussing the actual positions held and research being done on the topic. I have not read the entire article, but from engaging with it for ...

4
Davidmanheim
The last sentence in that quote gives away the game. The hypothesis - the one I'm saying is not supported by any evidence, and which has been falsified in the past - is that you can do degrowth without the downsides. The concrete proposals are to stop doing the things that increase economic growth. For example, they are opposed to mining more minerals, regardless of environmental damage, because they want less resource usage. Less isn't more.

You say their point is worthy of discussion. Which point? That there are finite limits? No, it's not worth discussing. Yes, there are limits to growth, but they aren't relevant. They are busy telling people energy is finite, so we should use less - ignoring the fact that energy can be plentiful with solar and other renewable sources.

These are the same people - literally the same, in some cases - as the "limits to growth" folks from decades ago, and the fact that they were wrong hasn't deterred them in the least. They are STILL telling people that we will run out of minerals, ignoring the fact that discoverable reserves are orders of magnitude larger than we need in the foreseeable future, and in most cases reserves have been getting larger over time.

But sure, you can tell me I haven't engaged with this, and that it needs more thought. I'm even happy to give it more thought - I just need you, or someone else, to point to what you think we should consider that isn't either philosophy about finitude ungrounded in any facts, or that is flat out wrong, instead of saying "consider this general area," one which I'm broadly familiar with already.

Being “agnostic” in all situations is itself a dogmatic position. It’s like claiming to be “agnostic” on every epistemic claim or belief. Sure, you can be, but some beliefs might be much more likely than others. I continue to consider the possibility that pleasure is not the only good; I just find it extremely unlikely. That could change.

If you read what I have written, you will see that I am not taking a dogmatic position but simply advocating for staying open-minded when approaching a situation. I tried to describe that as trying to be "agnostic" about the...

As above, these conflicting intuitions can only be resolved through a process of reflection. I am glad that you support such a process. You seem disappointed that the result of this process has, for me, led to utilitarianism. This is not a “premature closing of this process” any more than your pluralist stance is a premature closing of this process. What we are both doing is going back and forth saying “please reflect harder”. I have sprinkled some reading recommendations throughout to facilitate this.

I am only disappointed if you stop reflecting and quest...

1
JBentham
Being “agnostic” in all situations is itself a dogmatic position. It’s like claiming to be “agnostic” on every epistemic claim or belief. Sure, you can be, but some beliefs might be much more likely than others. I continue to consider the possibility that pleasure is not the only good; I just find it extremely unlikely. That could change. I do not think biological and psychological “reasons” are actually reasons, but you’re right that this gets us into a separate meta-ethical discussion. Thank you for the discussion!

One caution I want to add here is that downvoting when a post is fresh / not popular can have strong filter effects and lead to premature muting of discussion. If the first handful of readers simply dislike a post and downvote it, this makes it much less likely that a more diverse crowd of people will find it and express their take on it. We should consider that there are many different viewpoints out there and that this diversity is important for epistemic health. Thus, I encourage anyone to be mindful when considering whether to further downvote posts that are already unpopular.

1
Miquel Banchs-Piqué (prev. mikbp)
I am curious about the arguments from the person who voted disagree to alexherwix's comment.

I think one point of this post is to challenge the community to engage more openly with the question of degrowth and to engage in argument rather than dismissing it outright. I have not followed this debate in detail, but I sympathize with the take that issues which are controversial among EAs are often disregarded without due engagement by the community.

3
harfe
If a point of the article was to get the community to engage with the arguments for degrowth, the author should have engaged with the things that EA has written about degrowth. For example, https://www.vox.com/future-perfect/22408556/save-planet-shrink-economy-degrowth or https://forum.effectivealtruism.org/posts/XQRoDuBBt98wSnrYw/the-case-against-degrowth
3
Davidmanheim
Current gross world income is enough to provide everyone on earth with $10,000 per year. (And that overstates a lot of things because much of that wealth is in forms that aren't distributable.) But $10,000/year is under the US federal poverty level. It seems a lot like degrowth is embracing a world where everyone lives under the level that a poor person in the developed world lives, which seems in pretty stark contrast to the goals of making the world prosper. The claim of degrowth is that there's no way to have more than this consistent with long-term flourishing of humanity. That seems to fly in the face of every theoretical and observed claim about resource constraints and growth - climate is a real problem, but degrowth doesn't come anywhere close to solving it.

There are other claims that degrowth makes that seem unobjectionable, and worthy of debate - global poverty is being perpetuated by debt burden in the developing world, so the developed world needs to forgive that debt, foreign aid doesn't accomplish its stated aims and is a force for neo-colonialism, austerity overwhelmingly harms the poor and should be unacceptable, the enforcement of global intellectual property laws harms the developing world, and similar. But the central claim, that we need to have fewer goods, fewer people, and less prosperity, isn't really worth debate.

I think you are misrepresenting a few things here. 

First, Catholics talk a lot about ethics. Please come up with a better excuse to brush away the critique I made. I am almost offended by the laziness of your argument. 

Second, you are misrepresenting the post. It does not assert that we should "value everything that we already care emotionally about". It argues for reflecting about what values we actually hold dear and have good reason to hold dear. This stands in contrast to your position, which amounts to arguing for a premature closing of this... (read more)

1
JBentham
It wasn’t clear which aspect of Catholic dogma you were referring to. Catholic claims about ethics seem to crucially depend on a bunch of empirical claims that they make. Even so, I view such claims as just a subset of claims about ethics that depend on our intuitions. As above, these conflicting intuitions can only be resolved through a process of reflection. I am glad that you support such a process. You seem disappointed that the result of this process has, for me, led to utilitarianism. This is not a “premature closing of this process” any more than your pluralist stance is a premature closing of this process. What we are both doing is going back and forth saying “please reflect harder”. I have sprinkled some reading recommendations throughout to facilitate this.

The post does not mention whether we have reasons to hold certain things dear. It actually rejects such a framing altogether, claiming that the idea that we “should” (in a reason-implying sense) hold certain things dear doesn’t make sense. This is tantamount to nihilism, in my view. The first two points, meanwhile, are psychological rather than normative claims. As Sidgwick stated, the point of philosophy is not to tell people what they do think, but what they ought to think. I am always very happy to examine the plural goods that some say they value, but which I do not, and see whether convergence is possible.

Hey Devin, 

first of all, thanks for engaging and the offer in the end. If you want to continue the discussion feel free to reach out via PM. 

I think there is some confusion about my and also Spencer Greenberg's position. Afaik, we are both moral anti-realists and not suggesting that moral realism is a tenable position. Without presuming to know much about Spencer, I have taken his stance in the post to be that he did not want to "argue" with realists in that post because even though he rejects their position, it requires a different type of argum... (read more)

It is certainly conceivable that I am “under the pernicious influence of utilitarianism”, in which case I would by default become a nihilist and abandon any attempt to reduce the suffering of sentient beings.

You certainly lost me here. All I am asking for is humility regarding our ability to "know" things, in particular regarding ethics. Every part of your argument could have been made by Catholic dogmatists, who have likely engaged much longer and more deeply in painstaking reflection. For me that would be a worrying sign but I certainly did not intend for... (read more)

1
JBentham
Catholics make empirical claims about the natural world. Logical and moral truths do not fit into that category, so I disagree with the comparison.

The parent post makes no case whatsoever for caring about the things we value! All it does is assert that we ought to value everything that we already care emotionally about. Why should we act on everything we care emotionally about? How do we know that everything we care about is worth acting on? More humility may be required in all quarters!

Don’t worry, I still aim to maximise the well-being of all sentient beings because I think the very nature of pleasure gives me strong reason to want to increase it and that there are no other facts about the universe which give me similar reasons for action. The table in front of me certainly doesn’t. “Virtues” and “rights” are man-made fictions, not facts. Conscious experiences in general seem like a better bet, but the ‘redness’ of an object also doesn’t give me reason to act. It is only valenced experiences which do.

Hypothetically, though, were I to reject utilitarianism, I would by default become a nihilist precisely because I am humble about our ability to know things! I might still care about the suffering of sentient beings, but my caring about something is not a reason to act on it. Parfit is very good on this.

Call me naive but your argument doesn't go through for me. You write...

As in mathematics and logic, rational intuition is ultimately my yardstick for determining the truth of a proposition. I think it self-evident that the good of any one individual is of no more importance than the good of any other and that a greater good should be preferred to a lesser good. As for what that good is, everything comes down to pleasure on reflection.

So your standard for adjudicating the "truth" of propositions is your "rational intuition". You think your position "self-ev... (read more)

1
JBentham
I don’t consider the intuitions of adherents to competing moral theories to be strong evidence against the detailed, painstaking process of reflection that I and other utilitarians have been through. I also think that utilitarianism best accommodates and explains our common-sense moral intuitions, as Sidgwick argued in detail. Therefore, there is not as much disagreement between the broad mass of people and utilitarians as there might seem to be at first glance. Those who have invented ‘rights’ and ‘virtues’ out of thin air have much more serious disagreements with common-sense morality, which is a problem for them.

If most people thought that an object can simultaneously be red and green all over, their intuitions here wouldn’t be strong evidence against the fact that this is self-evidently absurd. For many centuries, Europeans rejected the idea that you could work with negative numbers. In cultures where negative numbers were being used, I don’t think this disagreement would have been good evidence against the self-evidence of negative numbers being useful in mathematics.

I fully accept that others can say similar things to me. That is fine. To use the example from your other post, you can say that it’s self-evident that Alice should take the morphine; I will say that it would be self-evidently wrong of Alice to deprive Bob of such a special experience.

All utilitarians can do is trust that, in time, reason will prevail. Pinker and Singer have both written about this. This is why we have been ahead of our time, while Kant’s views, for example, on various object-level issues are recognised as having been horribly wrong. It is certainly conceivable that I am “under the pernicious influence of utilitarianism”, in which case I would by default become a nihilist and abandon any attempt to reduce the suffering of sentient beings.

A few provocative questions: What is your yardstick for measuring the effectiveness of your theory compared to other theories? How much work have you done to figure out how to falsify utilitarianism and consider alternatives? How do you deal with the objections to utilitarianism and the fact that there is no expert consensus on what moral theory is "right"?

I mean, do whatever floats your boat as long as you don't hurt other people (and beings) and behave in otherwise responsible ways (i.e., please don't become the next SBF) but I am always pretty surprised and ... (read more)

7
JBentham
Thank you for the reply. As in mathematics and logic, rational intuition is ultimately my yardstick for determining the truth of a proposition. I think it self-evident that the good of any one individual is of no more importance than the good of any other and that a greater good should be preferred to a lesser good. As for what that good is, everything comes down to pleasure on reflection. The objections to this view fall prey to numerous biases (scope insensitivity, status quo bias), depend on knee-jerk emotional reactions, or rest on misunderstandings of the theory (for example, attacking naive as opposed to sophisticated utilitarianism). Some are even concerned with the practicality of the theory, which has no bearing on its truth.

There is no consensus in part because philosophers are under great pressure to publish. If Henry Sidgwick had figured most things out in his great 19th Century treatise The Methods of Ethics (the best book on ethics ever written, even according to many non-utilitarians), then that would rather spoil the fun. If you are interested in a painstaking attempt by a utilitarian to consider the alternatives, then have a read. It is extremely dense, but that is what's required. A good companion is the volume published nine years ago by Singer and Lazari-Radek.

Hurting other sentient beings is the antithesis of utilitarianism, as you know. Mr Bankman-Fried's alleged actions should serve as a warning against naive utilitarianism and are a reminder that commonly accepted negative duties should almost always be followed (on utilitarian grounds). We don't know whether these alleged actions were the product of his philosophical beliefs, or whether they had more to do with the pernicious influence of money and power. Regardless, that he went down such a career path in the first place was the result of his philosophical beliefs and we should therefore take some responsibility as a community. But I'm far more concerned about avoiding the (in)actions of

This position seems confusing to me. So, either (1) ethics is something "out there", which we can try to learn about and uncover. Then, we would tend to treat all our theories and models as approximations to some degree because similar issues as in science apply. Or (2) we take ethics as something which we define in some way to suit some of our own goals. Then, it's pretty arbitrary what models we come up with; whether they make sense depends mainly on the goals we have in mind.

This kind of mirrors the question whether a moral theory is to be taken a... (read more)

2
Devin Kalish
Because my draft response was getting too long, I’m going to put it as a list of relevant arguments/points, rather than the conventional format, hopefully not much is lost in the process:

- Ethics does take things out there in the world as its subjects, but I don’t take the comparison to empirical science in this case to work, because the methods of inquiry are more about discourse than empirical study. Empirical study comes at the point of implementation, not philosophy. The strong version of this point is rather controversial but I do endorse it; I will return to it in a couple bullets to expand it out.
- Even in empirical sciences, the idea of theories just being rough models is not always relevant. It comes from both uncertainty and the positive view that the actual real answer is far too complicated to exactly model. This is the difference between, say, economics and physics – theories in both will be tentative, and accept that they are probably just approximations right now because of uncertainty, but in economics this is not just a matter of historical humility, but also a positive belief about complexity in the world. Physics theories are both ways of getting good-enough-for-now answers, and positive proposals for ways some aspect of reality might actually be. Typically with plurality but not majority credence.
- Fully defining what I mean by ethics is difficult, and of less interest to me than doing the ethics. Maybe this seems a bit strange if you think defining ethics is of supreme importance to doing it, but my feeling of disconnect between the two is probably part of why I’m an anti-realist. I’m not sure there’s any definition I could plug into a machine to make an ethics-o-meter I would simply be satisfied taking its word for it on an answer (this is where the stronger version of bullet one comes in). This is sort of related to Brian Tomasik’s point that if moral realism were true, and it turned out that the true ethics was just torturing as many squirrel

Yeah, I think the intuitions it pumps really depend on the perspective and mindset of the reader. For me, it was triggering my desire to exhibit camaraderie and friendship in the last moments of life. I could also adjust the thought experiment so that nobody is hurt and simply ask whether one of them should take the morphine or whether they should die "being there for each other". I really do believe that we are kidding ourselves when we say that we only value "welfare" narrowly construed. But I get that some people may just look at such situations with a d... (read more)

2
Devin Kalish
I endorse moral uncertainty, but I think one should be careful in treating moral theories like vague, useful models of some feature of the world. I am not a utilitarian because I think there is some "ethics" out there in the world, and being utilitarian approximates it in many situations, I think the theory is the ethics, and if it isn't, the theory is wrong. What I take myself to be debating when I debate ethics isn't which model "works" the best, but rather which one is actually what I mean by "ethics".

Thanks for sharing this post and pointing out some of the inconsistencies and confusions you see around you! I think being curious and inquisitive about such matters and engaging in open and constructive dialog is important and healthy for the community! 

Interestingly, I actually made a related post just slightly earlier today, which was trying to spark some discussion around a thought experiment I came up with to highlight some similar concerns/observations. I think your post is much more fleshed out, so thank you for posting!

I changed the title of the question and made some small changes to the text to make clearer what I am after with this. I would like to encourage reflection on the part of the value monist utilitarians in this forum. There may be instrumentally good reasons to use value monist utilitarian theories for some purposes but we should be open-minded and forthright in acknowledging their limitations and not take them as a "moral theory of everything". Let's not mistake the map for the territory!

1
JBentham
I do not think it has any compelling limitations.

I agree with your general thrust. The thought experiment is a little bit contrived but deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it's not even clear which option one would go for.
 
However, what I really wonder though is if "welfare" is the only thing we care about at the end of times? Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied when we were alive? Are we not at ris... (read more)

I mean, I do get the appeal. But as you say it also has pretty huge drawbacks. I am curious how far people are willing to tie themselves to the mast and argue that value monism is actually a tenable position to take as a "life philosophy" despite its drawbacks. How far are you willing to defend your "principles" even if the situation really calls them into question? What would your reply to the thought experiment be?

2
Devin Kalish
The scenario given doesn’t seem to pump the intuition for value pluralism so much as prioritarianism. I suppose you could conceptualize prioritarianism as a sort of value pluralism, i.e. the value of helping those worse off and the value of happiness, but you can also create a single scale on which all that matters is happiness but the amount that it matters doesn’t exactly correspond to the amount of the happiness. I at least usually think of it as importantly distinct from most plural value theories. I’m open to the possibility that this is just semantics, but it does seem to avoid some dilemmas typical plural value theories have (though not all).

More on the topic of what to do about counterintuitive implications, my approach is fairly controversial, in that I mostly say if you can’t bite the bullet, don’t, but don’t revise your theory to take the bullet away. In part this just seems like a more principled approach to me as a rule, but also there are important areas of ethics, like aggregation or population axiology, where basically no good answers exist, and this is pretty much provable. This is just the nature of ethics once you get really deep into the weeds. My impression is that most philosophers respond to this by not endorsing complete theories, basically they just endorse certain specific principles that don’t come with serious bullets, and put off other questions where they don’t see a way to escape the bullets. I don’t think this ultimately fixes the problem for topics like these where the territory of possibilities has been scoured pretty thoroughly, but for what it’s worth it seems like a more common approach.

I have nothing against that and think it’s a viable position to have if one has actually invested the time to reason through the challenges presented to a degree that they feel comfortable with. I only question whether this justifies downvoting because to some degree it keeps other people from forming their own opinions on the matter.

Maybe our difference in opinion stems from my perception that downvoting is a tool that should be carefully wielded and not be used to simply highlight disagreement. (I mean there is a reason why we have two voting mechanisms for comments after all)

Mhh, I kind of disagree with the sentiment and assignment of responsibility here.

This is a link post to a critical post on EA-related ideas. I would hope this sparks a discussion of its merits. I get that some people may be tired of Torres but is this reason enough to actively try to prevent such a discussion? I mean nobody is forced to upvote but downvoting (in particular below 0) does limit the traction this gets from other people. To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short r... (read more)

To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short run (less stress) but is probably not the best long term strategy (less understanding).

Time and attention are finite; I think a lot of people think they have spent a lot more time reading Torres and trying to give him the benefit of the doubt than they have given to almost anyone else, and a lot more than is deserved by the quality of the content.
