But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties are strongly divergent, as in human vs. AI scenarios?
I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what should stop them from getting it?
I don’t see a way around value alignment in the strict sense (ironically this could also involve AIs aligning our values to theirs similar to how we have aligned dogs).
The difference is that a superintelligence or even an AGI is not human, and it will likely need very different environments from us to truly thrive. Ask factory-farmed animals or basically any other kind of nonhuman animal whether our world is in a state of violence or war… As soon as strong power differentials and diverging needs show up, the value co-creation narrative starts to lose its magic. It works great for humans, but it doesn’t really work with other species that are not very close to and aligned with us. Dogs and cats have arguably fared quite well but o...
This reminds me of the work on the Planungszelle in Germany but with some more bells and whistles. One difference that I see is that, afaik, the core idea in more traditional deliberation processes is that the process itself is also understandable by the average citizen. This gives it some grounding and legitimacy, in that all people involved in the process can cross-check each other and make sure that the outcome is not manipulated. You seem to diverge from this ideal a little bit in that you require the use of sophisticated statistica...
The key point that I am trying to make is that you seem to argue against our common-sense understanding that animals are sentient because they are anatomically similar to us in many respects and also demonstrate behavior that we would expect sentient creatures to have. Rather, you come up with your own elaborate requirements that you argue are necessary for being able to say something about qualia in other beings, but then at some point (maybe at the point where you feel comfortable with your conclusions) you stop following your own line of argument throug...
But how can you assume that humans in general have qualia if all the talking about qualia tells you only that qualia exist somewhere in the causal structure? Maybe all talking about qualia derives from a single source? How would you know? To me, this seems like a kind of reductio ad absurdum of your entire line of argument.
Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worthwhile exploring if you are so inclined.
One interesting line of work that you do not seem to be considering at the moment but could be interesting is the work done in the "metacrisis" (or polycrisis) space. See this presentation for an overview but I recommend diving deeper than this to get a better sense of the space. What this perspective is interested in is trying to understand and address the underlying patterns, which create the wicked situation we...
Hey Daniel,
as I also stated in another reply to Nick, I didn’t really mean to diminish the point you raised but to highlight that it is really more of a „meta point“ that’s only tangential to the substance of the issue outlined. My critical reaction was not aimed at you or the point you raised but at the more general community practice / trend of focusing on such points at the expense of engaging with the subject matter itself, in particular when the topic goes against mainstream thinking. This, I think, is somewhat demonstrated by the fact that your comme...
Hey Nick,
thanks for your reply. I didn’t mean to say that Daniel didn’t have a point. It’s a reasonable argument to make. I just wanted to highlight that this shouldn’t be the only angle to look at such posts. If you look, his comment is by far the most upvoted and it only addresses a point tangential to the problem at hand. Of course, getting upvoted is not his „fault“. I just felt compelled to highlight that overly focusing on this kind of angle only brings us so far.
Hope that makes it clearer :)
Your question reminded me of the following quote:
It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It
Maybe here we are talking about an alternative version of this:
It Is Difficult to Get a Man to Say Something When His Salary (or Relevance, Power, Influence, Status) Depends Upon Him Not Saying It
Isn’t your point a little bit pedantic here in the sense that you seem to be perfectly able to understand the key point the post was trying to make, find that point somewhat objectionable or controversial, and thus point to some issues regarding „framing“ rather than really engage deeply with the key points?
Of course, every post could be better written, more thoughtful, etc. but let’s be honest, we are here to make progress on important issues and not to win „argument style points.“ In particular, I find it disturbing that this technique of criticizing sty...
My problem with the post wasn't that it used subpar prose or "could be written better", it's that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn't about "argument style points", it's about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.
The reason I didn't personally engage with the object level is I didn't feel like I had anything particularly valuable to say on the topic. I didn't avoid saying my object-level views (if he had written a similar post with a style I didn't take issue with, I wouldn't have responded at all), and I don't want other people in the community to avoid engaging with the ideas either.
Thank you for writing this post!
I think it is really important to stay flexible in the mind and to not tie ourselves into race dynamics prematurely. I hope that reasonable voices such as yours can broaden the discourse and maybe even open up doors that were only closed in our minds but never truly locked.
Ok, I acknowledge that I might have misunderstood your intent. If I had taken your point to be a dispassionate explanation of why people (the EA community) don't engage with this topic, I myself might have reacted more dispassionately. However, as I read your comments, I don't think it was very clear that this is what you were after. Rather, it seemed like you were actively making the case against engaging with the topic and using strawmanning tactics to make your point. I would encourage you to be more clear in this regard in the future, I will try to b...
I would argue that it is a snarky but honest reflection of my state of mind. I also support my claim with evidence if you continue to read the comment. I am walking a fine line but I think my comment should still pass as constructive and well-intentioned all things considered. If you beg to differ feel free to make your case.
Wow, I am wondering whether to engage further or just let your reply stand as a testament to your "thoughtfulness". Doubling down on stereotyping and mischaracterizing people... great job! (sorry for the sarcasm, but I am STILL surprised when I encounter this type of behavior in the EA Forum, probably a sign of my naivety...). Nevertheless, for the benefit of the people who are intimidated by this type of behavior, I will try to give a short outline of where you, at least in my opinion, go wrong.
First, you seem to be upset that some people believe the...
I have to disclaim that I am NOT an expert on degrowth but from everything I know about the topic you are building up a huge strawman and misrepresenting their position in a way that really proves the point I was trying to make.
Just searching on Google Scholar for the term "degrowth" and looking at the first result, I come to an open-access article, "Research on Degrowth", in a reputable outlet, with a review discussing the actual positions held and research being done on the topic. I have not read the entire article but from engaging with it for ...
Being “agnostic” in all situations is itself a dogmatic position. It’s like claiming to be “agnostic” on every epistemic claim or belief. Sure, you can be, but some beliefs might be much more likely than others. I continue to consider the possibility that pleasure is not the only good; I just find it extremely unlikely. That could change.
If you read what I have written, you will see that I am not taking a dogmatic position but simply advocate for staying open-minded when approaching a situation. I tried to describe that as trying to be "agnostic" about the...
As above, these conflicting intuitions can only be resolved through a process of reflection. I am glad that you support such a process. You seem disappointed that the result of this process has, for me, led to utilitarianism. This is not a “premature closing of this process” any more than your pluralist stance is a premature closing of this process. What we are both doing is going back and forth saying “please reflect harder”. I have sprinkled some reading recommendations throughout to facilitate this.
I am only disappointed if you stop reflecting and quest...
One caution I want to add here is that downvoting a post while it is fresh / not popular can have strong filter effects and lead to premature muting of discussion. If the first handful of readers simply dislike a post and downvote it, this makes it much less likely that a more diverse crowd of people will find it and express their take on it. We should consider that there are many different viewpoints out there and that this diversity is important for epistemic health. Thus, I encourage anyone to be mindful when considering whether to further downvote posts that are already unpopular.
I think one point of this post is to challenge the community to engage more openly with the question of degrowth and to engage in argument rather than dismiss it outright. I have not followed this debate in detail but I sympathize with the take that issues which are controversial with EAs are often disregarded without due engagement by the community.
I think you are misrepresenting a few things here.
First, Catholics talk a lot about ethics. Please come up with a better excuse to brush away the critique I made. I am almost offended by the laziness of your argument.
Second, you are misrepresenting the post. It does not assert that we should "value everything that we already care emotionally about". It argues for reflecting about what values we actually hold dear and have good reason to hold dear. This stands in contrast to your position, which amounts to arguing for a premature closing of this...
Hey Devin,
first of all, thanks for engaging and the offer in the end. If you want to continue the discussion feel free to reach out via PM.
I think there is some confusion about my position and also Spencer Greenberg's. Afaik, we are both moral anti-realists and not suggesting that moral realism is a tenable position. Without presuming to know much about Spencer, I have taken his stance in the post to be that he did not want to "argue" with realists in that post because, even though he rejects their position, it requires a different type of argum...
It is certainly conceivable that I am “under the pernicious influence of utilitarianism”, in which case I would by default become a nihilist and abandon any attempt to reduce the suffering of sentient beings.
You certainly lost me here. All I am asking for is humility regarding our ability to "know" things, in particular regarding ethics. Every part of your argument could have been made by Catholic dogmatists, who have likely engaged for much longer and more deeply in painstaking reflection. For me that would be a worrying sign, but I certainly did not intend for...
Call me naive but your argument doesn't go through for me. You write...
As in mathematics and logic, rational intuition is ultimately my yardstick for determining the truth of a proposition. I think it self-evident that the good of any one individual is of no more importance than the good of any other and that a greater good should be preferred to a lesser good. As for what that good is, everything comes down to pleasure on reflection.
So your standard for adjudicating the "truth" of propositions is your "rational intuition". You think your position "self-ev...
A few provocative questions: What is your yardstick for measuring the effectiveness of your theory compared to other theories? How much work have you done to figure out how to falsify utilitarianism and consider alternatives? How do you deal with the objections to utilitarianism and the fact that there is no expert consensus on what moral theory is "right"?
I mean do what floats your boat as long as you don't hurt other people (and beings) and behave in otherwise responsible ways (i.e., please don't become the next SBF) but I am always pretty surprised and ...
This position seems confusing to me. So, either (1) ethics is something "out there", which we can try to learn about and uncover. Then, we would tend to treat all our theories and models as approximations to some degree because similar issues as in science apply. Or (2) we take ethics as something which we define in some way to suit some of our own goals. Then, it's pretty arbitrary what models we come up with, whether they make sense depends mainly on the goals we have in mind.
This kind of mirrors the question whether a moral theory is to be taken a...
Yeah, I think the intuitions it pumps really depend on the perspective and mindset of the reader. For me, it was triggering my desire to exhibit camaraderie and friendship in the last moments of life. I could also adjust the thought experiment so that nobody is hurt and simply ask whether one of them should take the morphine or whether they should die "being there for each other". I really do believe that we are kidding ourselves when we say that we only value "welfare" narrowly construed. But I get that some people may just look at such situations with a d...
Thanks for sharing this post and pointing out some of the inconsistencies and confusions you see around you! I think being curious and inquisitive about such matters and engaging in open and constructive dialog is important and healthy for the community!
Interestingly, I actually made a related post just slightly earlier today, which was trying to spark some discussion around a thought experiment I came up with to highlight some similar concerns/observations. I think your post is much more fleshed out, so thank you for posting!
I changed the title of the question and made some small changes to the text to make clearer what I am after with this. I would like to encourage reflection on the part of the value monist utilitarians in this forum. There may be instrumentally good reasons to use value monist utilitarian theories for some purposes but we should be open-minded and forthright in acknowledging its limitations and not take it as a "moral theory of everything". Let's not mistake the map for the territory!
I agree with your general thrust. The thought experiment is a little bit contrived but deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it's not even clear which option one would go for.
However, what I really wonder though is if "welfare" is the only thing we care about at the end of times? Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied when we were alive? Are we not at ris...
I mean, I do get the appeal. But as you say, it also has pretty huge drawbacks. I am curious how far people are willing to tie themselves to the mast and argue that value monism is actually a tenable position to take as a "life philosophy" despite its drawbacks. How far are you willing to defend your "principles" even if the situation really calls them into question? What would your reply to the thought experiment be?
I have nothing against that and think it’s a viable position to have if one has actually invested the time to reason through the challenges presented to a degree that they feel comfortable with. I only question whether this justifies downvoting because to some degree it keeps other people from forming their own opinions on the matter.
Maybe our difference in opinion stems from my perception that downvoting is a tool that should be carefully wielded and not be used to simply highlight disagreement. (I mean there is a reason why we have two voting mechanisms for comments after all)
Mhh, I kind of disagree with the sentiment and assignment of responsibility here.
This is a link post to a critical post on EA-related ideas. I would hope for this to spark some discussion of its merits. I get that some people may be tired of Torres, but is this reason enough to actively try to prevent such a discussion? I mean, nobody is forced to upvote, but downvoting (in particular below 0) does limit the traction this gets from other people. To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short r...
To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short run (less stress) but is probably not the best long term strategy (less understanding).
Time and attention are finite; I think a lot of people feel they have spent a lot more time reading Torres and trying to give him the benefit of the doubt than they have given to almost anyone else, and a lot more than is deserved by the quality of the content.
Yeah, I totally agree with you. This writing style is kind of annoying/cynical/bad-faith. Still it really does raise an interesting point as you acknowledge. I just wish more of the EA community would be able to see both of these points, take the interesting point on board, and take the high road on the annoying/cynical/bad-faith aspect.
For me the key insight in this last section is that utilitarianism as generally understood does not have an appreciation of time at all, it just cares about sums of value. Thus, the title of the book is indeed pretty ironic...
Yeah, I mean I understand that people don't really like Torres and this style of writing (it is pretty aggressive) but still there are some interesting points in the post, which I think deserve reflection and discussion. Just because "the other side" does not listen to the responses does not mean there is nothing to learn for oneself (or am I too naive in this belief?). So, I still think downvoting into oblivion is not the right move here.
Just to give an example, I think the end of the post is interesting to contemplate and cannot just be "dismissed"...
Wow, just downvotes without any critical engagement or justification… that’s not what I would have expected. I thought critical takes on longtermism would be treated as potentially helpful contributions to an open debate on an emerging concept that is still not very well understood?
I think the downvotes are coming from the fact that Émile P. Torres has been making similar-ish critiques on the concept of longtermism for a while now. (Plus, in some cases, closer to bad-faith attacks against the EA movement, like I think at one point saying that various EA leaders were trying to promote white supremacism or something?) Thus, people might feel both that this kind of critique is "old news" since it's been made before, and they furthermore might feel opposed to highlighting more op-eds by Torres.
Some previous Torres content whi...
I don’t like the question function being used in this way, but… it’s probably Thought Saver by Clearer Thinking you are looking for. Both are projects by the company Spark Wave, which is run by Spencer Greenberg.
I think the point of the virtue ethicist in this context would be that appropriate behavior is very much dependent on the situation. You cannot necessarily calculate the „right“ way in advance. You have to participate in the situation and „feel“, „live“ or „balance“ your way through it. There are too many nuances that cannot necessarily all be captured by language or explicit reasoning.
Afaik it is pretty well established that you cannot really learn anything new without actually testing your new belief in practice, i.e., through experiments. I mean, how else would this work? Evidence does not grow on trees; it has to be created (i.e., data has to be carefully generated, selected, and interpreted to become useful evidence).
While it might be true that this experimenting can sometimes be done using existing data, the point is that if you want to learn something new about the universe like “what is dark matter and can it be used for something?” ...
I am sorry, but I don’t really have time to check the document right now. Still, I would love to get your perspective on the potential value of simply giving all people standing to sue on behalf of future people, or even natural habitats, against policies that harm their interests. This seems pretty easy to do but could have pretty big consequences if the legal system had to start considering and weighing those perspectives as well. Any thoughts or reactions?
I think the point is not that it is not conceivable that progress can continue with humans still being alive but with the game theoretic dilemma that whatever we humans want to do is unlikely to be exactly what some super powerful advanced AI would want to do. And because the advanced AI does not need us or depend on us, we simply lose and get to be ingredients for whatever that advanced AI is up to.
Your example with humanity fails because humans have always and continue to be a social species that is dependent on each other. An unaligned advanced AI would...
I would argue that an important component of your first argument still stands. Even though AlphaFold can predict structures to some level of accuracy based on training data sets that may already exist, an AI would STILL need to check whether what it learned is usable in practice for the purposes it is intended for. This logically requires experimentation. Also keep in mind that most data which already exists was not deliberately prepared to help a machine "do X". Any intelligence, no matter how strong, will still need to check its hypotheses and, thus, prepare data sets that can actually deliver the evidence necessary for drawing warranted conclusions.
I am not really sure what the consequences of this are, though.
Hey @JohannaE
interesting idea and project. Are you aware of other players in this space such as http://metabus.org/ or to some degree https://elicit.org ? I think metaBUS in particular aspires to do something similar to you but seems much further along the curve (e.g., https://www.sciencedirect.com/science/article/abs/pii/S1053482216300675). However, when I interacted with them a couple of years ago, they were still struggling to gain traction. This may be a tough nut to crack!
To me it seems like you have a wrong premise. A wellbeing focused perspective is explicitly highlighting the fact that Sentinelese and the modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life and what the grounds for your own beliefs about what is important are.
You seem to have a very strong opinion that something like technological progress is intrinsically valuable. Living in a more technically advanced society is "inherently better" and, thus, every...
Just a short follow up: I just wrote a post on the hedonic treadmill and suggest that it is an interesting concept to reflect about in relation to life in general:
I think that it may be helpful to unpack the nature of perceived happiness and wellbeing a little bit more than this post does. I think the idea of hedonic adaptation is pretty well known—most of us have probably heard of the hedonic treadmill (see Brickman & Campbell, 1971). The work on hedonic adaptation points to the fact that perceived happiness and wellbeing are relative constructs that largely depend on the reference points that are invoked. To oversimplify things a little bit, if everyone around me is badly off, I may already be happy if I am only s...
If you take this as your point of departure, I think it’s worth highlighting that the boundaries between community and organizations can become very blurry in EA. Projects pop up all the time, and innocuous situations might turn controversial over time. I think those examples of second-order partners in polyamorous relationships being (more or less directly) involved in funding decisions are a prime example. There is probably no intent or planning behind this, but conflicts of interest are bound to arise if the community is tight-knit and highly “interco...
Yeah, totally agreed that it's not that clear and easy. My comment was meant to be a starting point. I purposefully kept it pretty short and focused on one, easy conclusion, as the whole issue is super complex, I don't have it well-thought through and I'm probably missing a lot of information and context.
I think, however, that the whole discussion is over-focused on sex and polyamory, and not focused enough on other interpersonal connections which surely occur in a community like that (friendships? living together? ex-partners?).
I kind of skimmed this post, so hopefully I am not making a fool of myself, but I think you didn’t really address a key point raised by „critics“, namely the challenges associated with the tendency toward centralization in EA.
There are basically two to three handfuls of people who control massive amounts of wealth, many of whom are interwoven in a web of difficult-to-untangle relationships ranging from friendly to romantic. The denser this web is, the more difficult it is for people to understand what is going on. Are rejections or grants based...
Thanks for the response. I agree that this might not be „pleasant“ to read, but I tried to make a somewhat plausible argument that illustrates some of the tensions that might be at play here. And I think this is what the comment that I replied to asked for.
Also I would argue that the comment „holding up“ when we are switching to related phenomena (at least sex positive gay culture) could actually be an indicator of it pointing to some general underlying dynamics regarding „weirdness“ in relation to orthodoxy. Weirdness tends to leave more room for deviance f...
Just to explain why I downvoted this comment. I think it is pretty defensive and not really engaging with the key points of the response, which made no indication that would justify a conclusion like: „You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.“
There is nothing in the capability approach as explained that would keep you from using survey data to consider which options to provide. On the contrary, I would argue it to be more open and flexible for such an approach because it is less limited...
I have never said that how we treat nonhuman animals is “solely” due to differences in power. The point that I have made is that AIs are not humans and I have tried to illustrate that differences between species tend to matter in culture and social systems.
But we don’t even have to go to species differences, ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don’t we all engage in mutually beneficial trade and cooperate to live happily ever after?
Because while we have mostly con...