All of alexherwix's Comments + Replies

I have never said that how we treat nonhuman animals is “solely” due to differences in power. The point that I have made is that AIs are not humans and I have tried to illustrate that differences between species tend to matter in culture and social systems.

But we don’t even have to go to species differences, ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don’t we all engage in mutually beneficial trade and cooperate to live happily ever after?

Because while we have mostly con... (read more)

But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties diverge as strongly as in human-vs-AI scenarios?

I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what would stop them from getting it?

I don’t see a way around value alignment in the strict sense (ironically this could also involve AIs aligning our values to theirs similar to how we have aligned dogs).

The difference is that a superintelligence or even an AGI is not human, and they will likely need very different environments from us to truly thrive. Ask factory-farmed animals or basically any other kind of nonhuman animal whether our world is in a state of violence or war… As soon as strong power differentials and diverging needs show up, the value co-creation narrative starts to lose its magic. It works great for humans, but it doesn’t really work with other species that are not very close and aligned with us. Dogs and cats have arguably fared quite well but o... (read more)

5
Matthew_Barnett
1mo
Animals are not socially integrated in society, and we do not share a common legal system or culture with them. We did not inherit legal traditions from them. Nor can we agree to mutual contracts, or coordinate with them in a meaningful way. These differences seem sufficient to explain why we treat them very differently as you described. If this difference in treatment was solely due to differences in power, you'd need to explain why vulnerable humans are not regularly expropriated, such as old retired folks, or small nations.

This reminds me of the work on the Planungszelle in Germany, but with some more bells and whistles. One difference that I see is that afaik the core idea in more traditional deliberation processes is that the process itself is also understandable by the average citizen. This gives it some grounding and legitimacy in that all people involved in the process can cross-check each other and make sure that the outcome is not manipulated. You seem to diverge from this ideal a little bit in the sense that you seem to require the use of sophisticated statistica... (read more)

6
Odyssean Institute
5mo
Thank you for a thoughtful response! Indeed, we have considered these risks and although for the sake of brevity haven't delved heavily into the range of experimental designs for an assembly in the White Paper directly, we have in conversations with strategic partners such as Missions Publiques. We agreed that a model similar to theirs on certain assemblies would be wise. This involves the public deliberating in isolation first, so they aren't overly primed by the horizon scan, before then being introduced to the findings of the panel afterwards. This allows for iterations in the Process, without overly influencing initial values and considerations from the public. So for example, the public would be consulted, help to sculpt the optimalities scan in DMDU, and then incorporate the EEJ panel's findings to refine and deepen engagement. Ultimately the assembly decides, so we are aware of the need to balance these steps to ensure they support rather than subvert this aspect.

DMDU has a considerable emphasis on translating findings effectively, and avoiding getting bamboozled by models (such as the emphasis Erica Thompson puts on caution around this in 'Escape from Model Land'). It is a positive sign that DMDU practitioners are well aware of the 'fallacy of misplaced concreteness' and the risks this poses, and a large part of their methodology is devised to keep this explicit. The education phase of an assembly would also involve familiarising participants carefully with the value and limits of the models used, with ranges of uncertainties. It also bears noting that while not all questions will require modelling, done carefully and translated with caution, certain civilisational risks will need this level of rigour.

The key point that I am trying to make is that you seem to argue against our common-sense understanding that animals are sentient because they are anatomically similar to us in many respects and also demonstrate behavior that we would expect sentient creatures to have. Rather, you come up with your own elaborate requirements that you argue are necessary for a being to be able to say something about qualia in other beings, but then at some point (maybe at the point where you feel comfortable with your conclusions) you stop following your own line of argument throug... (read more)

But how can you assume that humans in general have qualia if all the talking about qualia tells you only that qualia exist somewhere in the causal structure? Maybe all talking about qualia derives from a single source? How would you know? To me, this seems like a reductio ad absurdum of your entire line of argument.

-5
MikhailSamin
5mo

Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worthwhile exploring if you are so inclined. 

One interesting line of work that you do not seem to be considering at the moment but could be interesting is the work done in the "metacrisis" (or polycrisis) space. See this presentation for an overview but I recommend diving deeper than this to get a better sense of the space. What this perspective is interested in is trying to understand and address the underlying patterns, which create the wicked situation we... (read more)

Hey Daniel,

as I also stated in another reply to Nick, I didn’t really mean to diminish the point you raised but to highlight that this is really more of a „meta point“ that’s only tangential to the substance of the issue outlined. My critical reaction was not meant to be against you or the point you raised but against the more general community practice / trend of focusing on such points at the expense of engaging with the subject matter itself, in particular when the topic goes against mainstream thinking. This, I think, is somewhat demonstrated by the fact that your comme... (read more)

Hey Nick,

thanks for your reply. I didn’t mean to say that Daniel didn’t have a point. It’s a reasonable argument to make. I just wanted to highlight that this shouldn’t be the only angle from which to look at such posts. If you look, his comment is by far the most upvoted, and it only addresses a point tangential to the problem at hand. Of course, getting upvoted is not his „fault“. I just felt compelled to highlight that overly focusing on this kind of angle only gets us so far.

Hope that makes it clearer :)

Your question reminded me of the following quote:

It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It

Maybe here we are talking about an alternative version of this:

It Is Difficult to Get a Man to Say Something When His Salary (or Relevance, Power, Influence, Status) Depends Upon Him Not Saying It

Isn’t your point a little bit pedantic here in the sense that you seem to be perfectly able to understand the key point the post was trying to make, find that point somewhat objectionable or controversial, and thus point to some issues regarding „framing“ rather than really engage deeply with the key points?

Of course, every post could be better written, more thoughtful, etc. but let’s be honest, we are here to make progress on important issues and not to win „argument style points.“ In particular, I find it disturbing that this technique of criticizing sty... (read more)

My problem with the post wasn't that it used subpar prose or "could be written better", it's that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn't about "argument style points", it's about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.

The reason I didn't personally engage with the object level is I didn't feel like I had anything particularly valuable to say on the topic. I didn't avoid saying my object-level views (if he had written a similar post with a style I didn't take issue with, I wouldn't have responded at all), and I don't want other people in the community to avoid engaging with the ideas either.

6
NickLaing
6mo
Thanks Alex. In general I agree with you: viewpoints that fall outside of what most EAs think do sometimes get strawmanned and voted down without good reason (like the ideas you mention of handing more power to governments, or post-growth concepts). In this case, though, I think the original poster was fairly aggressive with rhetorical tricks as a pretty large part of making their argument, so I think Daniel's criticism was reasonable.

Thank you for writing this post!

I think it is really important to stay flexible in the mind and to not tie ourselves into race dynamics prematurely. I hope that reasonable voices such as yours can broaden the discourse and maybe even open up doors that were only closed in our minds but never truly locked.

Ok, I acknowledge that I might have misunderstood your intent. If I had taken your point to be a dispassionate explanation of why people (the EA community) don't engage with this topic, I myself might have reacted more dispassionately. However, as I read your comments, I don't think it was very clear that this is what you were after. Rather, it seemed like you were actively making the case against engaging with the topic and using strawmanning tactics to make your point. I would encourage you to be more clear in this regard in the future; I will try to b... (read more)

2
Davidmanheim
7mo
On your first point, I was first clarifying that there has been discussion of this, and there was a pretty clear reason to dismiss this in general - while in my very first post agreeing that "There are other claims that degrowth makes that seem unobjectionable, and worthy of debate." You attacked that, and my position, and I defended it. I don't think I used a strawman at any point - I think that I responded to your general claim about "degrowth" with an accurate characterization of that position, and you retreated to a series of specific analyses that defend specific points.

On your second point, you're incorrectly interpreting what was done in 1972, which I'm very, very familiar with - I've actually read the report, and used the model as a teaching tool. It was absolutely intended to predict consequences of decisions, to support specific decisionmakers, and they explicitly said that while it was imperfect, it was intended to be used as-is in order to make decisions. I can only urge you to read their original work. The patterns it explored didn't hold up, the models were wrong in how they projected the key inputs and factors, and the conclusions they came to were wrong. Recent claims that they got things right are revisionist and wrong - post-hoc justification is possible anywhere, but as I've said for years, it's unsupportable here.

And finally, in general, if you ask others to be more charitable to a position instead of defending it, you're asking for a favor, rather than saying that something stands on its own merits. I did not say there was nothing here worthy of consideration, but I did say that their central claim was wrong. I agree that it's wonderful to be charitable in discussions, but as a general point, no, I don't think it makes sense to try to be charitable to and steelman every opposing viewpoint every time it is brought up, especially after you've looked into it.

I would argue that it is a snarky but honest reflection of my state of mind. I also support my claim with evidence if you continue to read the comment. I am walking a fine line but I think my comment should still pass as constructive and well-intentioned all things considered. If you beg to differ feel free to make your case. 

Wow, I am wondering whether to engage further or just let your reply stand as a testament to your "thoughtfulness". Doubling down on stereotyping and mischaracterizing people... great job! (Sorry for the sarcasm, but I am STILL surprised when I encounter this type of behavior on the EA Forum; probably a sign of my naivety...) Nevertheless, for the benefit of the people who are intimidated by this type of behavior, I will try to give a short outline of where you, at least in my opinion, go wrong.

First, you seem to be upset that some people believe the... (read more)

1
mikbp
7mo
A recent project looking into those sorts of things in the context of Europe is the MEDEAS project. For the French speakers around here, Jean-Marc Jancovici has a lot of material in French in his lessons at the Ecole des Mines. I have only seen a couple of his talks, which happened to be in English. Hagens has an online course, Reality 101, which I found really good. I find his podcast too "sentimental".
2
Davidmanheim
7mo
I'm explaining why people haven't engaged with this - the specifics are missing, or have little to do with degrowth, or are wrong. You can cite the study "justifying" limits to growth (which I've discussed on this forum before!), but they said that there would be a collapse decades ago, so it's hard to take that seriously. I'm sure there is a steelmanned version of this that deserves some credit, and I initially said that there are some ideas from that movement that deserve credit - but I don't understand what it has to do with the degrowth movement, which is pretty explicit about what it wants and aims for.
1
harfe
7mo
I think this is insufficiently kind.

I have to disclaim that I am NOT an expert on degrowth but from everything I know about the topic you are building up a huge strawman and misrepresenting their position in a way that really proves the point I was trying to make. 

Just searching on Google Scholar for the term "degrowth" and looking at the first result, I come to an open-access article, "Research on Degrowth", in a reputable outlet, reviewing the actual positions held and research being done on the topic. I have not read the entire article but from engaging with it for ... (read more)

2
Davidmanheim
7mo
The last sentence in that quote gives away the game. The hypothesis - the one I'm saying is not supported by any evidence, and which has been falsified in the past - is that you can do degrowth without the downsides. The concrete proposals are to stop doing the things that increase economic growth. For example, they are opposed to mining more minerals, regardless of environmental damage, because they want less resource usage. Less isn't more.

You say their point is worthy of discussion. Which point? That there are finite limits? No, it's not worth discussing. Yes, there are limits to growth, but they aren't relevant. They are busy telling people energy is finite, so we should use less - ignoring the fact that energy can be plentiful with solar and other renewable sources.

These are the same people - literally the same, in some cases - as the "limits to growth" folks from decades ago, and the fact that they were wrong hasn't deterred them in the least. They are STILL telling people that we will run out of minerals, ignoring the fact that discoverable reserves are orders of magnitude larger than we need in the foreseeable future, and in most cases reserves have been getting larger over time.

But sure, you can tell me I haven't engaged with this, and that it needs more thought. I'm even happy to give it more thought - I just need you, or someone else, to point to what you think we should consider that isn't either philosophy about finitude ungrounded in any facts, or flat out wrong, instead of saying "consider this general area," one which I'm broadly familiar with already.

Being “agnostic” in all situations is itself a dogmatic position. It’s like claiming to be “agnostic” on every epistemic claim or belief. Sure, you can be, but some beliefs might be much more likely than others. I continue to consider the possibility that pleasure is not the only good; I just find it extremely unlikely. That could change.

If you read what I have written, you will see that I am not taking a dogmatic position but simply advocate for staying open-minded when approaching a situation. I tried to describe that as trying to be "agnostic" about the... (read more)

As above, these conflicting intuitions can only be resolved through a process of reflection. I am glad that you support such a process. You seem disappointed that the result of this process has, for me, led to utilitarianism. This is not a “premature closing of this process” any more than your pluralist stance is a premature closing of this process. What we are both doing is going back and forth saying “please reflect harder”. I have sprinkled some reading recommendations throughout to facilitate this.

I am only disappointed if you stop reflecting and quest... (read more)

1
JBentham
7mo
Being “agnostic” in all situations is itself a dogmatic position. It’s like claiming to be “agnostic” on every epistemic claim or belief. Sure, you can be, but some beliefs might be much more likely than others. I continue to consider the possibility that pleasure is not the only good; I just find it extremely unlikely. That could change. I do not think biological and psychological “reasons” are actually reasons, but you’re right that this gets us into a separate meta-ethical discussion. Thank you for the discussion!

One caution I want to add here is that downvoting a post while it is fresh / not popular can have strong filter effects and lead to premature muting of discussion. If the first handful of readers simply dislike a post and downvote it, it becomes much less likely that a more diverse crowd of people will find it and express their take on it. We should consider that there are many different viewpoints out there and that this diversity is important for epistemic health. Thus, I encourage everyone to be mindful when considering whether to further downvote posts that are already unpopular.

1
mikbp
7mo
I am curious about the arguments from the person who voted disagree to alexherwix's comment.

I think one point of this post is to challenge the community to engage more openly with the question of degrowth and to engage in argument rather than dismiss it outright. I have not followed this debate in detail but I sympathize with the take that issues which are controversial with EAs are often disregarded without due engagement by the community.

3
harfe
7mo
If a point of the article was to get the community to engage with the arguments for degrowth, the author should have engaged with the things that EA has written about degrowth. For example, https://www.vox.com/future-perfect/22408556/save-planet-shrink-economy-degrowth or https://forum.effectivealtruism.org/posts/XQRoDuBBt98wSnrYw/the-case-against-degrowth
3
Davidmanheim
7mo
Current gross world income is enough to provide everyone on earth with $10,000 per year. (And that overstates a lot of things, because much of that wealth is in forms that aren't distributable.) But $10,000/year is under the US federal poverty level. It seems a lot like degrowth is embracing a world where everyone lives under the level that a poor person in the developed world lives at, which seems in pretty stark contrast to the goals of making the world prosper.

The claim of degrowth is that there's no way to have more than this consistent with long-term flourishing of humanity. That seems to fly in the face of every theoretical and observed claim about resource constraints and growth - climate is a real problem, but degrowth doesn't come anywhere close to solving it.

There are other claims that degrowth makes that seem unobjectionable, and worthy of debate - global poverty is being perpetuated by debt burden in the developing world, so the developed world needs to forgive that debt; foreign aid doesn't accomplish its stated aims and is a force for neo-colonialism; austerity overwhelmingly harms the poor and should be unacceptable; the enforcement of global intellectual property laws harms the developing world; and similar. But the central claim, that we need to have fewer goods, fewer people, and less prosperity, isn't really worth debate.

I think you are misrepresenting a few things here. 

First, Catholics talk a lot about ethics. Please come up with a better excuse to brush away the critique I made. I am almost offended by the laziness of your argument. 

Second, you are misrepresenting the post. It does not assert that we should "value everything that we already care emotionally about". It argues for reflecting on what values we actually hold dear and have good reason to hold dear. This stands in contrast to your position, which amounts to arguing for a premature closing of this... (read more)

1
JBentham
7mo
It wasn’t clear which aspect of Catholic dogma you were referring to. Catholic claims about ethics seem to crucially depend on a bunch of empirical claims that they make. Even so, I view such claims as just a subset of claims about ethics that depend on our intuitions.

As above, these conflicting intuitions can only be resolved through a process of reflection. I am glad that you support such a process. You seem disappointed that the result of this process has, for me, led to utilitarianism. This is not a “premature closing of this process” any more than your pluralist stance is a premature closing of this process. What we are both doing is going back and forth saying “please reflect harder”. I have sprinkled some reading recommendations throughout to facilitate this.

The post does not mention whether we have reasons to hold certain things dear. It actually rejects such a framing altogether, claiming that the idea that we “should” (in a reason-implying sense) hold certain things dear doesn’t make sense. This is tantamount to nihilism, in my view. The first two points, meanwhile, are psychological rather than normative claims. As Sidgwick stated, the point of philosophy is not to tell people what they do think, but what they ought to think. I am always very happy to examine the plural goods that some say they value, but which I do not, and see whether convergence is possible.

Hey Devin, 

first of all, thanks for engaging and the offer in the end. If you want to continue the discussion feel free to reach out via PM. 

I think there is some confusion about my and also Spencer Greenberg's position. Afaik, we are both moral anti-realists and not suggesting that moral realism is a tenable position. Without presuming to know much about Spencer, I have taken his stance in the post to be that he did not want to "argue" with realists in that post because even though he rejects their position, it requires a different type of argum... (read more)

It is certainly conceivable that I am “under the pernicious influence of utilitarianism”, in which case I would by default become a nihilist and abandon any attempt to reduce the suffering of sentient beings.

You certainly lost me here. All I am asking for is humility regarding our ability to "know" things, in particular regarding ethics. Every part of your argument could have been made by Catholic dogmatists, who have likely engaged much longer and more deeply in painstaking reflection. For me, that would be a worrying sign, but I certainly did not intend for... (read more)

1
JBentham
7mo
Catholics make empirical claims about the natural world. Logical and moral truths do not fit into that category, so I disagree with the comparison.

The parent post makes no case whatsoever for caring about the things we value! All it does is assert that we ought to value everything that we already care emotionally about. Why should we act on everything we care emotionally about? How do we know that everything we care about is worth acting on? More humility may be required in all quarters!

Don’t worry, I still aim to maximise the well-being of all sentient beings because I think the very nature of pleasure gives me strong reason to want to increase it and that there are no other facts about the universe which give me similar reasons for action. The table in front of me certainly doesn’t. “Virtues” and “rights” are man-made fictions, not facts. Conscious experiences in general seem like a better bet, but the ‘redness’ of an object also doesn’t give me reason to act. It is only valenced experiences which do.

Hypothetically, though, were I to reject utilitarianism, I would by default become a nihilist precisely because I am humble about our ability to know things! I might still care about the suffering of sentient beings, but my caring about something is not a reason to act on it. Parfit is very good on this.

Call me naive but your argument doesn't go through for me. You write...

As in mathematics and logic, rational intuition is ultimately my yardstick for determining the truth of a proposition. I think it self-evident that the good of any one individual is of no more importance than the good of any other and that a greater good should be preferred to a lesser good. As for what that good is, everything comes down to pleasure on reflection.

So your standard for adjudicating the "truth" of propositions is your "rational intuition". You think your position "self-ev... (read more)

1
JBentham
7mo
I don’t consider the intuitions of adherents to competing moral theories to be strong evidence against the detailed, painstaking process of reflection that I and other utilitarians have been through. I also think that utilitarianism best accommodates and explains our common-sense moral intuitions, as Sidgwick argued in detail. Therefore, there is not as much disagreement between the broad mass of people and utilitarians as there might seem to be at first glance. Those who have invented ‘rights’ and ‘virtues’ out of thin air have much more serious disagreements with common-sense morality, which is a problem for them.

If most people thought that an object can simultaneously be red and green all over, their intuitions here wouldn’t be strong evidence against the fact that this is self-evidently absurd. For many centuries, Europeans rejected the idea that you could work with negative numbers. In cultures where negative numbers were being used, I don’t think this disagreement would have been good evidence against the self-evidence of negative numbers being useful in mathematics.

I fully accept that others can say similar things to me. That is fine. To use the example from your other post, you can say that it’s self-evident that Alice should take the morphine; I will say that it would be self-evidently wrong of Alice to deprive Bob of such a special experience. All utilitarians can do is trust that, in time, reason will prevail. Pinker and Singer have both written about this. This is why we have been ahead of our time, while Kant’s views, for example, on various object-level issues are recognised as having been horribly wrong.

It is certainly conceivable that I am “under the pernicious influence of utilitarianism”, in which case I would by default become a nihilist and abandon any attempt to reduce the suffering of sentient beings.

A few provocative questions: What is your yardstick for measuring the effectiveness of your theory compared to other theories? How much work have you done to figure out how to falsify utilitarianism and consider alternatives? How do you deal with the objections to utilitarianism and the fact that there is no expert consensus on what moral theory is "right"?

I mean, do what floats your boat as long as you don't hurt other people (and beings) and behave in otherwise responsible ways (i.e., please don't become the next SBF), but I am always pretty surprised and ... (read more)

7
JBentham
7mo
Thank you for the reply. As in mathematics and logic, rational intuition is ultimately my yardstick for determining the truth of a proposition. I think it self-evident that the good of any one individual is of no more importance than the good of any other and that a greater good should be preferred to a lesser good. As for what that good is, everything comes down to pleasure on reflection. The objections to this view fall prey to numerous biases (scope insensitivity, status quo bias), depend on knee-jerk emotional reactions, or rest on misunderstandings of the theory (for example, attacking naive as opposed to sophisticated utilitarianism). Some are even concerned with the practicality of the theory, which has no bearing on its truth.

There is no consensus in part because philosophers are under great pressure to publish. If Henry Sidgwick had figured most things out in his great 19th Century treatise The Methods of Ethics (the best book on ethics ever written, even according to many non-utilitarians), then that would rather spoil the fun. If you are interested in a painstaking attempt by a utilitarian to consider the alternatives, then have a read. It is extremely dense, but that is what's required. A good companion is the volume published nine years ago by Singer and Lazari-Radek.

Hurting other sentient beings is the antithesis of utilitarianism, as you know. Mr Bankman-Fried's alleged actions should serve as a warning against naive utilitarianism and are a reminder that commonly accepted negative duties should almost always be followed (on utilitarian grounds). We don't know whether these alleged actions were the product of his philosophical beliefs, or whether it had more to do with the pernicious influence of money and power. Regardless, that he went down such a career path in the first place was the result of his philosophical beliefs and we should therefore take some responsibility as a community. But I'm far more concerned about avoiding the (in)actions of

This position seems confusing to me. So, either (1) ethics is something "out there", which we can try to learn about and uncover. Then, we would tend to treat all our theories and models as approximations to some degree, because issues similar to those in science apply. Or (2) we take ethics as something which we define in some way to suit some of our own goals. Then, it's pretty arbitrary what models we come up with; whether they make sense depends mainly on the goals we have in mind.

This kind of mirrors the question whether a moral theory is to be taken a... (read more)

2
Devin Kalish
7mo
Because my draft response was getting too long, I'm going to put it as a list of relevant arguments/points, rather than the conventional format; hopefully not much is lost in the process:

- Ethics does take things out there in the world as its subjects, but I don't take the comparison to empirical science in this case to work, because the methods of inquiry are more about discourse than empirical study. Empirical study comes at the point of implementation, not philosophy. The strong version of this point is rather controversial but I do endorse it; I will return to it in a couple bullets to expand on it.

- Even in empirical sciences, the idea of theories just being rough models is not always relevant. It comes from both uncertainty and the positive view that the actual real answer is far too complicated to exactly model. This is the difference between, say, economics and physics - theories in both will be tentative, and accept that they are probably just approximations right now because of uncertainty, but in economics this is not just a matter of historical humility, but also a positive belief about complexity in the world. Physics theories are both ways of getting good-enough-for-now answers, and positive proposals for ways some aspect of reality might actually be. Typically with plurality but not majority credence.

- Fully defining what I mean by ethics is difficult, and of less interest to me than doing the ethics. Maybe this seems a bit strange if you think defining ethics is of supreme importance to doing it, but my feeling of disconnect between the two is probably part of why I'm an anti-realist. I'm not sure there's any definition I could plug into a machine to make an ethics-o-meter I would simply be satisfied taking its word for it on an answer (this is where the stronger version of bullet one comes in). This is sort of related to Brian Tomasik's point that if moral realism were true, and it turned out that the true ethics was just torturing as many squirrel

Yeah, I think the intuitions it pumps really depend on the perspective and mindset of the reader. For me, it was triggering my desire to exhibit camaraderie and friendship in the last moments of life. I could also adjust the thought experiment so that nobody is hurt and simply ask whether one of them should take the morphine or whether they should die "being there for each other". I really do believe that we are kidding ourselves when we say that we only value "welfare" narrowly construed. But I get that some people may just look at such situations with a d... (read more)

2
Devin Kalish
7mo
I endorse moral uncertainty, but I think one should be careful in treating moral theories like vague, useful models of some feature of the world. I am not a utilitarian because I think there is some "ethics" out there in the world, and being utilitarian approximates it in many situations, I think the theory is the ethics, and if it isn't, the theory is wrong. What I take myself to be debating when I debate ethics isn't which model "works" the best, but rather which one is actually what I mean by "ethics".

Thanks for sharing this post and pointing out some of the inconsistencies and confusions you see around you! I think being curious and inquisitive about such matters and engaging in open and constructive dialog is important and healthy for the community! 

Interestingly, I actually made a related post just slightly earlier today, which was trying to spark some discussion around a thought experiment I came up with to highlight some similar concerns/observations. I think your post is much more fleshed out, so thank you for posting!

I changed the title of the question and made some small changes to the text to make clearer what I am after with this. I would like to encourage reflection on the part of the value monist utilitarians in this forum. There may be instrumentally good reasons to use value monist utilitarian theories for some purposes but we should be open-minded and forthright in acknowledging its limitations and not take it as a "moral theory of everything". Let's not mistake the map for the territory!

1
JBentham
7mo
I do not think it has any compelling limitations.

I agree with your general thrust. The thought experiment is a little bit contrived, but deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it's not even clear which option one would go for.
 
However, what I really wonder though is if "welfare" is the only thing we care about at the end of times? Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied when we were alive? Are we not at ris... (read more)

I mean, I do get the appeal. But as you say, it also has pretty huge drawbacks. I am curious how far people are willing to tie themselves to the mast and argue that value monism is actually a tenable position to take as a "life philosophy" despite its drawbacks. How far are you willing to defend your "principles" even if the situation really calls them into question? What would your reply to the thought experiment be?

2
Devin Kalish
7mo
The scenario given doesn’t seem to pump the intuition for value pluralism so much as prioritarianism. I suppose you could conceptualize prioritarianism as a sort of value pluralism, I.e. the value of helping those worse off and the value of happiness, but you can also create a single scale on which all that matters is happiness but the amount that it matters doesn’t exactly correspond to the amount of the happiness. I at least usually think of it as importantly distinct from most plural value theories. I’m open to the possibility that this is just semantics, but it does seem to avoid some dilemmas typical plural value theories have (though not all). More on the topic of what to do about counterintuitive implications, my approach is fairly controversial, in that I mostly say if you can’t bite the bullet, don’t, but don’t revise your theory to take the bullet away. In part this just seems like a more principled approach to me as a rule, but also there are important areas of ethics, like aggregation or population axiology, where basically no good answers exist, and this is pretty much provable. This is just the nature of ethics once you get really deep into the weeds. My impression is that most philosophers respond to this by not endorsing complete theories, basically they just endorse certain specific principles that don’t come with serious bullets, and put off other questions where they don’t see a way to escape the bullets. I don’t think this ultimately fixes the problem for topics like these where the territory of possibilities has been scoured pretty thoroughly, but for what it’s worth it seems like a more common approach.

I have nothing against that and think it’s a viable position to have if one has actually invested the time to reason through the challenges presented to a degree that they feel comfortable with. I only question whether this justifies downvoting because to some degree it keeps other people from forming their own opinions on the matter.

Maybe our difference in opinion stems from my perception that downvoting is a tool that should be carefully wielded and not be used to simply highlight disagreement. (I mean there is a reason why we have two voting mechanisms for comments after all)

Hmm, I kind of disagree with the sentiment and the assignment of responsibility here.

This is a link post to a critical post on EA-related ideas. I would hope this sparks some discussion of its merits. I get that some people may be tired of Torres, but is this reason enough to actively try to prevent such a discussion? I mean, nobody is forced to upvote, but downvoting (in particular below 0) does limit the traction this gets from other people. To me this feels like trying to bury voices one doesn't want to hear, which may be helpful in the short r... (read more)

To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short run (less stress) but is probably not the best long term strategy (less understanding).

Time and attention are finite; I think a lot of people think they have spent a lot more time reading Torres and trying to give him the benefit of the doubt than they have given to almost anyone else, and a lot more than is deserved by the quality of the content.

Yeah, I totally agree with you. This writing style is kind of annoying/cynical/bad-faith. Still, it really does raise an interesting point, as you acknowledge. I just wish more of the EA community would be able to see both of these points, take the interesting point on board, and take the high road on the annoying/cynical/bad-faith aspect.

For me the key insight in this last section is that utilitarianism as generally understood does not have an appreciation of time at all, it just cares about sums of value. Thus, the title of the book is indeed pretty ironic... (read more)

1
Rhyss
7mo
You might enjoy "On the Survival of Humanity" (2017) by Johann Frick. Frick makes the same point there that you quote Torres as making—that total utilitarians care about the total number and quality of experiences but are indifferent to whether these experiences are simultaneous or extended across time. Torres has favorably cited Frick elsewhere, so I wouldn't be surprised if they were inspired by this article. You can download it here: https://oar.princeton.edu/bitstream/88435/pr1rn3068s/1/OnTheSurvivalOfHumanity.pdf

Yeah, I mean I understand that people don't really like Torres and this style of writing (it is pretty aggressive) but still there are some interesting points in the post, which I think deserve reflection and discussion. Just because "the other side" does not listen to the responses does not mean there is nothing to learn for oneself (or am I too naive in this belief?). So, I still think downvoting into oblivion is not the right move here. 

Just to give an example, I think the end of the post is interesting to contemplate and cannot just be "dismissed"... (read more)

-8
IrenaK
7mo
4
Jackson Wagner
7mo
Kind of a repetitive stream-of-consciousness response, but I found this both interesting as a philosophical idea and also annoying/cynical/bad-faith: This is interesting but also, IMO, kind of a strawman -- what's being attacked is some very specific form of utilitarianism, whereas I think many/most "longtermists" are just interested in making sure that we get some kind of happy long-term future for humanity and are fuzzy about the details.  Torres says that "Longtermists would surely argue...", but I would like to see some real longtermists quoted as arguing this!! Personally, I think that taking total-hedonic-utilitarianism 100% seriously is pretty dumb (if you keep doubling the number of happy people, eventually you get to such high numbers that it seems the moral value has to stop 2x-ing because you've got quadrillions of people living basically identical lives), but I still consider myself a longtermist, because I think society is underrating how bad it would be for a nuclear war or similar catastrophe to wreck civilization. Personally I would also put some (although not overwhelming) weight on the continuity in World B on account of how it gives life more meaning (or at least it would mean that citizens of World B would be more similar to myself -- like me, they too would plan for the future and think of themselves as being part of a civilization that extends through time, rather than World A which seems like it might develop a weird "nothing matters" culture that I'd find alienating).  I think a lot of EAs would agree that something feels off about World A, although the extra 10 billion people is definitely a plus, and that overall it seems like an unsolved philosophical mystery whether it matters if your civilization is stretched out in time or not, or whether there is even an objective "right answer" to that question vs being a matter of purely personal taste.  At the end of the day I'm very uncertain as to whether world A vs B is better; population ethic

Wow, just downvotes without any critical engagement or justification… that’s not what I would have expected. I thought critical takes on longtermism would be treated as potentially helpful contributions to an open debate on an emerging concept that is still not very well understood?

-5
Radical Empath Ismam
7mo
5
Larks
7mo
I didn't downvote, but this article definitely would have warranted a downvote if it had been posted directly to the forum; if you think there are a few redeeming sections you should probably highlight them directly rather than asking people to sort through it all.

I think the downvotes are coming from the fact that Émile P. Torres has been making similar-ish critiques on the concept of longtermism for a while now.  (Plus, in some cases, closer to bad-faith attacks against the EA movement, like I think at one point saying that various EA leaders were trying to promote white supremacism or something?)  Thus, people might feel both that this kind of critique is "old news" since it's been made before, and they furthermore might feel opposed to highlighting more op-eds by Torres.

Some previous Torres content whi... (read more)

I don’t like the question function being used in this way but… it’s probably Thought Saver by Clearer Thinking that you are looking for. Both are projects by the company Spark Wave, which is run by Spencer Greenberg.

I think the point of the virtue ethicist in this context would be that appropriate behavior is very much dependent on the situation. You cannot necessarily calculate the "right" way in advance. You have to participate in the situation and "feel", "live", or "balance" your way through it. There are too many nuances that cannot necessarily all be captured by language or explicit reasoning.

As far as I know, it is pretty well established that you cannot really learn anything new without actually testing your new belief in practice, i.e., through experiments. I mean, how else would this work? Evidence does not grow on trees; it has to be created (i.e., data has to be carefully generated, selected, and interpreted to become useful evidence).

While it might be true that this experimenting can sometimes be done using existing data, the point is that if you want to learn something new about the universe like “what is dark matter and can it be used for something?” ... (read more)

I am sorry, but I don’t really have time to check the document right now. However, I would love to get your perspective on the potential value of just giving all people standing to sue on behalf of future people, or even natural habitats, against policies that harm their interests. This seems pretty easy to do but could have pretty big consequences if the legal system needed to start considering and weighing those perspectives as well. Any thoughts or reactions?

I think the point is not that it is inconceivable that progress continues with humans still being alive, but rather the game-theoretic dilemma that whatever we humans want to do is unlikely to be exactly what some super powerful advanced AI would want to do. And because the advanced AI does not need us or depend on us, we simply lose and become ingredients for whatever that advanced AI is up to.

Your example with humanity fails because humans have always been, and continue to be, a social species whose members depend on each other. An unaligned advanced AI would... (read more)

2
RobBensinger
1y
I would much more say that it fails because humans have human values. Maybe a hunter-gatherer would have worried that building airplanes would somehow cause a catastrophe? I don't exactly see why; the obvious hunter-gatherer rejoinder could be 'we built fire and spears and our lives only improved; why would building wings to fly make anything bad happen?'. Regardless, it doesn't seem like you can get much mileage via an analogy that sticks entirely to humans. Humans are indeed safe, because "safety" is indexed to human values; when we try to reason about non-human optimizers, we tend to anthropomorphize them and implicitly assume that they'll be safe for many of the same reasons. Cf. The Tragedy of Group Selectionism and Anthropomorphic Optimism. 'Wow, I can't imagine a way to do something so ambitious without causing lots of carnage in the process' is definitely not the argument! On the contrary, I think it's pretty trivial to get good outcomes from humans via a wide variety of different ways we could build WBE ourselves. The instrumental convergence argument isn't 'I can't imagine a way to do this without killing everyone'; it's that sufficiently powerful optimization behaves like maximizing optimization for practical purposes, and maximizing-ish optimization is dangerous if your terminal values aren't included in the objective being maximized. If it helps, we could maybe break the disagreement about instrumental convergence into three parts, like: * Would a sufficiently powerful paperclip maximizer kill all humans, given the opportunity? * Would sufficiently powerful inhuman optimization of most goals kill all humans, or are paperclips an exception? * Is 'build fast-running human whole-brain emulation' an ambitious enough task to fall under the 'sufficiently powerful' criterion above? Or if so, is there some other reason random policies might be safe if directed at this task, even if they wouldn't be safe for other similarly-hard tasks?

I would argue that an important component of your first argument still stands. Even though AlphaFold can predict structures to some level of accuracy based on training data sets that may already exist, an AI would STILL need to check whether what it learned is usable in practice for its intended purposes. This logically requires experimentation. Also bear in mind that most data which already exists was not deliberately prepared to help a machine "do X". Any intelligence, no matter how strong, will still need to check its hypotheses and, thus, prepare data sets that can actually deliver the evidence necessary for drawing warranted conclusions.

I am not really sure what the consequences of this are, though. 

1
Kenny
1y
I think a sufficiently intelligent intelligence can generate accurate beliefs from evidence, not just 'experiments', and not just its own experiments. I imagine AIs will be suggesting experiments too (if they're not already). It is still plausible that not being able to run its own experiments will greatly hamper AI's scientific agendas, but it's harder to know how much it will exactly for intelligences likely to be much more intelligent than ourselves.

Hey @JohannaE  

interesting idea and project. Are you aware of other players in this space such as http://metabus.org/ or to some degree https://elicit.org ? I think metaBUS in particular aspires to do something similar to you but seems much further along the curve (e.g., https://www.sciencedirect.com/science/article/abs/pii/S1053482216300675). However, when I interacted with them a couple of years ago, they were still struggling to gain traction. This may be a tough nut to crack!

1
JohannaE
1y
Thanks for the recommendations! I did know Elicit but haven't heard about metaBUS yet - will definitely take a closer look!

To me it seems like you are starting from a mistaken premise. A wellbeing-focused perspective explicitly highlights the fact that the Sentinelese and modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life and what the grounds for your own beliefs about what is important are.

You seem to have a very strong opinion that something like technological progress is intrinsically valuable. Living in a more technically advanced society is "inherently better" and, thus, every... (read more)

Just a short follow up: I just wrote a post on the hedonic treadmill and suggest that it is an interesting concept to reflect about in relation to life in general: 

https://forum.effectivealtruism.org/posts/WMaeBDPdSJLKDzk2d/the-hedonic-treadmill-dilemma-reflecting-on-the-stories-of-1 

I think that it may be helpful to unpack the nature of perceived happiness and wellbeing a little bit more than this post does. The idea of hedonic adaptation is pretty well known—most of us have probably heard of the hedonic treadmill (see Brickman & Campbell, 1971). The work on hedonic adaptation points to the fact that perceived happiness and wellbeing are relative constructs that largely depend on the reference points which are invoked. To oversimplify things a little bit, if everyone around me is badly off, I may already be happy if I am only s... (read more)

1
Henry Howard
1y
The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.
0
Stan Pinsent
1y
Well said. I share @Henry Howard 's reservations about WELLBYs, but I would argue that even if WELLBY comparisons are near-meaningless between New Yorkers and Sentinelese, they are probably much more meaningful when comparing one individual's wellbeing before and after treatment, or even comparing control and intervention groups drawn randomly from the same population.

If you take this as your point of departure, I think it's worth highlighting that the boundaries between community and organizations can become very blurry in EA. Projects pop up all the time and innocuous situations might turn controversial over time. The examples of second-order partners of polyamorous relationships being (more or less directly) involved in funding decisions are a prime case. There is probably no intent or planning behind this, but conflicts of interest are bound to arise if the community is tight-knit and highly "interco... (read more)

Liv
1y

Yeah, totally agreed that it's not that clear and easy. My comment was meant to be a starting point. I purposefully kept it pretty short and focused on one, easy conclusion, as the whole issue is super complex, I don't have it well-thought through and I'm probably missing a lot of information and context.
I think, however, that the whole discussion is over-focused on sex and polyamory, and not focused enough on other interpersonal connections which for sure happen in a community like that (friendships? living together? ex-partners?). 

I kind of skimmed this post, so hopefully I am not making a fool of myself, but I think you didn't really address a key point which is raised by "critics", and that is the challenges associated with the tendency for centralization in EA.

There are basically two or three handfuls of people who control massive amounts of wealth, many of whom are interwoven in a web of difficult-to-untangle relationships ranging from friendly to romantic. The denser this web is, the more difficult it is for people to understand what is going on. Are rejections or grants based... (read more)

1
Tristan Williams
1y
I'm not really sure this particularly applies here though. I think power being concentrated is not a runoff effect of their being dense relationship clusters within EA, and instead that not having systems of diffuse decision making just is the key problem for the issue of centrality that you mention. Sure, there are probably some cases where friendships have allowed people to bypass more formal and open means of communicating about decisions between orgs, but I still think the effect that has on perpetuating this system is minimal at best.  But to run with your comment a bit further, what do you think might be the best way to solve the centrality issue of EA?
4
Amber Dawn
1y
Yeah, this seems very reasonable. I'd be in favour of less centralization and more transparency. It does seem like there are issues where grantmakers have to decide about whether to give a grant to present or former partners or metamours, or close friends. Maybe there could be a system where people's grant proposals must always be assessed by someone who doesn't live in the same hub as them (if they live in a hub). 

Thanks for the response. I agree that this might not be "pleasant" to read, but I tried to make a somewhat plausible argument that illustrates some of the tensions that might be at play here. And I think this is what the comment that I replied to asked for.

Also, I would argue that the comment "holding up" when we switch to related phenomena (at least sex-positive gay culture) could actually be an indicator that it points to some general underlying dynamics regarding "weirdness" in relation to orthodoxy. Weirdness tends to leave more room for deviance f... (read more)

Just to explain why I downvoted this comment. I think it is pretty defensive and not really engaging with the key points of the response, which made no indication that would justify a conclusion like: "You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports."

There is nothing in the capability approach as explained that would keep you from using survey data to consider which options to provide. On the contrary, I would argue it to be more open and flexible for such an approach because it is less limited... (read more)
