Hi Yarrow. Thanks for taking the time to share your comment on my thoughts.
I have actually been planning to add more detail to my post, since I shared it while I was still finishing my thoughts in my head. But that doesn't mean I'm dismissing your view.
Let me reply with a sneak peek of what I am planning to share in detail later:
Yes, I support long-term investment in scientific and technological research and development to enrich and perhaps extend human life. But I consider technological post-humanism/transhumanism to be just one component or activity of post-humanism as a broader philosophy.
What I have been thinking of is post-humanism as a philosophy and practical way of living life from an absolutely fundamental perspective: regarding ourselves as something like "we know we identify ourselves as 'human', but we are actually just one of the countless entities in this vast cosmos", and then working everything else out forward from that point.
I am sure you won't judge, but the above statement may seem somewhat extreme, and similar ideas may have been put forward, and experimented with terribly, by others throughout history. But I would like to try again this time. With all the advancements in biology, neuroscience, etc. (and of course AI) made by us humans, I feel even more motivated and optimistic that this worldview could be the way forward for solving many problems.
I want to start or join a non-profit working on promoting post-humanism: mainly as an ideology, philosophy, attitude, and mindset at first, and then gradually as a practical movement with political and public-policy efforts.
This is my LinkedIn - https://www.linkedin.com/in/soe-lin-htut/
(Oh, I'm not promoting myself or looking for a job. I'm only expressing my enthusiasm for the idea and sharing more about myself, as I am still a new member of this community.)
[This reply is written completely by me. No ChatGPT involved.]
Firstly, thank you for taking the time to comment!
Secondly, I am really struggling to decide which of the things I want to say should come first for "Secondly". Let me just take a risk. So, here it comes...
Everything that comes next, no matter how soft, strong, or weird it sounds in terms of language or meaning, please interpret it with a degree of care and kindness (I'm sure you will), including this sentence.
Although I feel quite certain that I wanted to let out the ideas and opinions I shared in my post, I was not completely certain how they should or would sound in the reader's interpretation, especially in terms of the English language, even though I said I polished the post with ChatGPT and said that "I acknowledge that I fully agree with it".
I don't want to sound or appear apologetic, defensive, unconfident, or like I'm seeking empathy or pity for what I share in the next sentences, but I think replying to you with these messages will more likely help enrich your current interpretation of my post and even facilitate further discussion of the core ideas and messages presented in it.
The post was only my third time sharing such big, bold (by my standards) opinions with English-speaking, intellectual and professional communities like the EA Forum.
I come from a completely different (or distant) educational, professional, social, and geographical background when it comes to topics like AI, consciousness, and science in general, and to participating in communities like this.
And I'm sure you have already noticed that English is not my first language. I have been using English in "professional settings" (if you want, I can explain more about what I mean by this) for over a decade, but not continuously, and definitely not yet in a community like this.
I think what I am trying to say is that my ability to use and understand English is not fully calibrated with my heartfelt intention to express my imaginings, ideas, and feelings, and to discuss them in the way I want.
About two years ago, I went through profound changes in my life. Among all the good and bad things that resulted, I have found exploring consciousness, human existence, and AI (I know it's too general to just say AI, but let's keep it short in this comment) very exciting, and I have been trying to figure out whether I should, and would be able to, explore those topics even further and more practically. By participating in communities like the EA Forum, I hope I will learn more about what to do next.
The published post is a language-polished version of my writing, made with the assistance of ChatGPT. But I acknowledge that I fully agree with it, and it reflects the ideas and message I wanted to convey in my original writing. Below is my original writing, and here are the changes and polishes I made with ChatGPT: https://chatgpt.com/share/66f8e0cb-b224-800a-9808-f167b83447c7
Humans considering whether AIs deserve moral status is both one of the most humane and one of the silliest things humans do
I read the 2017 Report on Consciousness and Moral Patienthood by Luke Muehlhauser a few months ago. I recently encountered the paper "AI alignment vs AI ethical treatment: Ten challenges". And yesterday, 80,000 Hours published the article "Understanding the moral status of digital minds".
So I think now is the right time to ask all those people, and everyone in humanity in general who is wondering whether AIs deserve moral rights, this very simple question:
"Who the hell do you think you are?"
This is both a literal and practical question.
Before I continue further, let me tell you just a little bit about myself. Since long before I became well aware of human rights, animal rights, and all sorts of such things (I mean at least two decades ago; or let's say, before I became a more mature, humane human being), I have been someone who would instantly and naturally apologize (and did apologize) to a sleeping stray dog for accidentally waking him or her up because I tripped nearby. I am someone who would ask (and did ask), instead of fighting back or trying to protect myself, "Why are you doing this to me?" when someone unexpectedly ran up to me and punched me in the face.
So, if the answer is something like "We are human beings, just one of the species on this planet Earth. It is a human thing; it is a fundamental, moral, ethical thing as a species to treat all other species and potential species equally, and to try our best to find ways to do so", then I am totally and already on board. Because I am a human being too!
However, this framework of thinking, or simply assuming that it is a natural action for human beings to look at other species, or at anything else non-human, whether biological and natural (like sand, water, dark matter) or man-made, including the AI we are referring to in this topic, through these lenses (whether "they" should be treated equally by "us", whether "they" have consciousness, intelligence, sentience, etc.), is a fundamentally wrong way of thinking, a wrong way of looking at things equally.
It is one grand act of speciesism that we are committing toward the other non-human entities on "this planet" without realizing it ourselves.
And I don't blame these people, or myself, for having held such opinions. Because, as far and as much as we have understood ourselves as a species and everything else on this planet and in this universe, and based on our own "definition(s)" of being human, our (limited, incomplete) understanding of intelligence, consciousness, etc., and our "definitions" of morality, ethics, rights, and all that, it is of course a natural, moral, let's just say "good" thing to have concerns for others.
But let's now ask the question again: "Who the hell do we think we are!?"
We are just what we define ourselves as: "human beings", a "species" that we think happens to have (this or that level of) "intelligence" and "consciousness" (now you know the drill), and that happens to exist on this "planet" (which is itself a thing that happens to exist in this "universe") among all the other species and things that happen to exist, or that we created into existence, on the same planet.
So it is fundamentally wrong, irrelevant, and, most importantly in this age of AI, very dangerous for our own existence, to think it is a natural thing, a right thing, a moral thing, our responsibility, to look at everything else (well, not everything, but I'm sure you get my point) on this planet through our own "definitions" of natural/unnatural, right/wrong, moral/ethical, and all that shit.
At this point, I'm sure you all know where I am going with this. But let me add just a bit more, because I don't want to sound like a radical existentialist or survivalist or anything like that; at the same time, I believe I need to stress my points firmly, because although they seem obvious, they are still too subtle and sensitive for most of us to (want to or be able to) realize, let alone act on.
Everything we are thinking about the moral status of digital minds is valid only if everything happening in this world now, and everything we believe about ourselves, is true in the way we believe it is. We may and can continue to believe so and act according to this belief, and it may turn out we were right (until the point when we realize that we were wrong, and by then it will be too late) to have considered, and acted on, all the benefits and challenges of understanding the moral status of digital minds and everything else non-human.
Because everything we think and believe we know about ourselves and everything else, no matter how profound and profoundly right, is still very limited, and is still very likely to be totally wrong when we compare ourselves and our knowledge (for lack of a better word) to the existence, in both time and scale, of the universe. To see this fact clearly, we need to literally zoom out from the Earth and look at ourselves and everything else from outer space (and probably also from the beginning of our existence as the human species).
When we look at ourselves from that angle, we will see clearly that we are just one of the entities that happen to exist in this universe, or on this planet to be specific. Human beings are just one of the many entities on this planet we "defined as the Earth" who happen to have what we "defined" and "measured" as having such-and-such a level of "intelligence", "consciousness", and so on. And based on such perspectives and definitions, we happen to believe that the other entities on this planet have / will have / may have / should have / deserve / should deserve different levels of, or no, "intelligence", "consciousness", "moral righteousness", and all that.
But did we ever think to look at ourselves from the perspective of the universe?
From the perspective of the universe, I believe, or rather it is just a fact, that we are just some particles foolishly thinking that we are intelligent and conscious and... worrying about other entities. If and when we no longer exist, for whatever reason, the universe won't care. The universe doesn't care. The universe never cares.
Did we ever look at ourselves from the perspectives of those other entities on this planet?
Yes, maybe some of them do have the same or a similar form of intelligence and consciousness as we do, at a different, and for now lower, level. If and because they do, then yes, maybe some or all of them deserve to be treated equally, or in whichever way we believe they should be treated.
But what if they don't have the consciousness and intelligence we think they have?
Whether they do or not, what if they actually don't want to be treated the way we think they deserve?
More importantly, what if they have a completely different form of consciousness and intelligence than ours, and hence have always been enjoying their lives with their own definitions of "life", "pleasure", "ethics", "moral patienthood", and all that?
And what if the type of consciousness and intelligence they have is actually higher than ours?
Imagine that the way we treat our pet cats and dogs is actually the result of one of the greatest achievements of psychological warfare in their evolutionary timeline. Imagine ants and termites looking at our greatest architectural buildings, or whatever, and laughing at us every day as they walk past us on the tree branches.
Imagine any creative example that you, the reader, a very intelligent human, can think of for this viewpoint.
Again, yes, according to the very reliable information and understanding we have gained so far (I just didn't want to say "according to all the scientific evidence we have so far"), it is certain, or very possible, that those other entities on this planet that we think have consciousness and intelligence either don't have the same kind, or have only a much lower level, of consciousness and intelligence than we have. And it is "right" and "moral" for us human beings to consider their well-being and act on it as much as we can, and to do everything else we have been talking about and doing on this topic.
But now, with the creation of the current state of AI, the potential to create even more powerful digital minds or AGI (for lack of better words to refer to everything we want to refer to in this topic), and the potential to do other things with or because of them, this activity of us human beings considering whether non-human entities are worthy of moral concern, and acting on the answer to that question, has become an even more dangerous activity for the survival of us human beings.
Here are a few examples and explanations of why.
Let's say we started this line of thinking with our pets. Then we realized it was an act of speciesism and expanded our consideration and actions to other animals (even if we are going to eat some of them anyway). Then we expanded our expedition to even more kinds of animals, insects, and (living) things in nature that we never thought were who we now think they are, and that we never expected to include in such considerations. Finally, we are now looking at ChatGPT and its friends, or digital minds. I wish I could start talking about them now, but let me stick with our less intelligent and conscious evolutionary friends just a little longer.
No matter how well-intentioned we are, I believe the main reason (though we don't realize it now) that we believe in and make such good efforts toward, and give such good treatment to, our evolutionary friends is actually that we believe, and think we know, that they are and always will be "less intelligent and conscious" than we are.
If, evolutionarily or with the help of the AI we created, all the fish, dung beetles, our cats (or dogs), and even plants and trees become as intelligent and conscious as we are, and if all of them start negotiating (or fighting) with us for their share of space and resources on this planet; or, even worse (let me refer back to one of my thoughts above), if they evolve to such a level that their intelligence, consciousness, ethics, and "purpose of existence" are completely different from ours, and they simply (are able to) eat all of us, because eating and growing and dying (and not being afraid of dying) is the most conscious thing they do,
then what will we do? I am pretty sure our considerations about them would then be pretty different from the ones we hold now.
Well, actually, at this point, I don't think I even need to give examples for digital minds any more.
I know my thoughts may seem quite extreme. But please don't forget what I shared with you earlier about myself. I believe I am just one of the humane human beings.
The point I am trying to make with this article is that if we are going to try to understand and decide what to do about the moral status of digital minds and everything else non-human on this planet and in this universe, we will have to shift our perspective: from looking at things from the angle of us being human beings as we define ourselves, to looking at things as just one of the entities in the universe (entities that happen to have, or believe and assume themselves to have, intelligence and consciousness). Only from this perspective will we be able to come up with practical solutions to deal fairly with the challenges such beings will pose, and at the same time come up with solutions to preserve and extend our own existence in this universe as an entity.
For transparency, I have shared my original draft and transcribed notes below:
My idea for this article is that, when it comes to AI and the future with AI, even if everything goes right with AI, the future of humanity could still be really weird, and that means it could still be bad for us. This could even become the title of the article itself: even if everything goes well with AI, many things could still go wrong, or things could be very bad, quite bad, or a little bit bad for us human beings. Because the main power of AI is thinking, or creativity, or intelligence, or even consciousness, although we as human beings don't really know what consciousness exactly is. This is on the same theme that all the other AI and intellectual thinkers are pointing to: AI deals with thinking. I just lost my train of thought here, so I'll pause, but I will continue.
Okay, yeah, to continue: the main thing, maybe the only thing, that AI deals with is human intelligence, imagination, ideas, or consciousness. And for us human beings, things like consciousness, ideas, thinking, and imagination obviously have to do with our brain, our thinking, our psychology. That is where I think the danger really comes into play. Yes, we human beings think we are intelligent; definitely, or quite obviously, we are more intelligent than cats and dogs and even other mammals and animals, and we think we are very smart. And yes, we have proven so through creativity and human imagination. Humans have desires, and actually I think all the other animals, as much as we know, also have desires. But because they are not as "intelligent" (in open and closed quotes), they can't really invent or create things to fulfill their desires. We human beings, because we think we are intelligent and have creativity, invented things, from tools to language to, eventually, this AI. (For the sake of letting my thoughts flow, I may be repeating some things quite often.) And we did not only create things that are externally useful to us; we also examined ourselves, examined the human mind, and invented fields of study like psychology and cognitive and behavioral science, everything that seems to deal with how we think about thinking and how our mind works. But external, material, physical things, however many ways there are to create and invent them, are sort of deterministic: if the idea was to invent a car, we thought of the idea of a car and we invented the car, and after the car is invented, it is sort of finished. The mind itself, though, can be as imaginative as it can be. It is sort of random, and randomness doesn't have limitations, doesn't have an end. Because the mind doesn't have a deterministic ending, we don't know the nature of the mind. We don't actually know what the mind is.
Yeah, so to continue: when we think about the mind, it is about psychology. But psychology is, again, something this mind itself invented. Among all the things the mind can imagine and create (although there isn't really a hierarchy of their magnitude or power), psychology is something at a very low level. Other animals have their own minds too, and of course some level of intelligence. Our pets, especially, know who their friend is, who their owner or master is; they know when they are hungry, when they are angry, when they want to mate, and all sorts of things. But because we human beings define ourselves as more intelligent, we have all these imaginations and ideas, and I think concepts like consciousness or intelligence are good things in general, but very vulnerable concepts, because we don't exactly know what they are. I think what I'm trying to say is this: there is this universe, and we don't know when it started or when it will end; there is this planet Earth and all the other planets, which look like they are real; and then we came into existence. Our human mind has sort of figured out and identified how we came into existence, from carbon and single-celled beings to these multicellular things called human beings. What we know for sure is that we are now in this form, this entity called a human being, this collection of cells and bodies and this thing called a mind. And this mind itself is the thing that is trying to define, or limit, or frame ourselves as having this consciousness. On top of that, or in relation to that, we define ourselves as having identity, agency, and free will; and some scientists and philosophers who have done a lot of work on this have already started asking whether there is free will or not. Because we have this thing called a mind and we can think, we have created, identified, labeled, and framed ourselves in all sorts of ways. But I think the mind itself is a very fragile, vulnerable kind of thing. That's why, when we think about our future, our existence, and our survival, and now about all the threats that can come from AI, the cognitive and psychological effects matter. I'll try to give a clearer version of this last part of the idea in the next message, but I want it captured as much as it is now, for now.
So, yeah, to continue: we have this thing called a mind, supposedly in our brain, and because we have a mind and this intelligence, and we can think and imagine and have ideas, we also come up with all these ideas about our identity, our consciousness, our intelligence. And then we build on that: we identify ourselves as human beings and social animals; we tell the story that we came into existence from single cells through evolution, that we are evolutionary beings. And now we have human society, countries, cultures, families, life, survival, economics, future growth, and especially survival. I still consider myself a human being and I definitely want to survive and live as long as possible; one version of me thinks and feels so. But when I try to be as open-minded as possible about myself, I sometimes imagine: what if I don't have agency? Just thinking about that sometimes makes it difficult for me to stand steadily, you know? So our mind is very powerful, and we have shaped and developed all these seemingly strong and rigid mental structures and definitions, and, in extension of that, all the practical ideas like money, education, life, and productivity, to support our existence and our survival. All of these concepts seem very strong, but even by thinking about it very quickly, we know they are very fragile. So maybe one of the main points I want to make is this: AI now seems to affect our ability to think, our cognition. Yes, AI these days is mostly manipulation of language, but language itself is a tool we created to manipulate our mind, to take advantage of how our mind does things. And to add more meaning, I think AI doesn't just manipulate language; AI can manipulate information and data. In a sense, the ability of AI is to manipulate randomness. Our mind can be as imaginative as possible, and AI is the technology that can work with randomness. Computers before could not really work with randomness; no matter how powerful, they had deterministic limits. Cars, again, can only drive; they cannot wash clothes. We created this technology, AI, with our minds, out of our imagination, based on our invented ideas like life, society, development, and growth, and on our desire to fulfill our desires and make our life easier. Yes, we created AI as a tool in the hope that it would make our life easier and better. But because it deals with the mind, with imagination, and with randomness, we don't really know where the outcomes will lead, how they could affect us. That's the real danger of AI.
So, to draw some points together and capture the idea for the article, some of the points I'd like to emphasize are something like this: as we are now discussing and trying to identify and imagine all the dangers (and, of course, all the good things) that can happen after superintelligence, or whatever the ultimate AI will be, I think almost all of the risks and dangers we are imagining and identifying are based on our mind's own imaginative definitions: how it will affect the economy, humanity, society, countries. Yes, there are already some people starting to point out, and it is also obvious, the effect it can have on our mind, our cognition, and our agency. My idea falls into that category too; it joins forces with these threats around our cognition, our psychology, and ultimately our survival, our self. But the more distinctive point I'm trying to make is that all of the threats and opportunities we are identifying are still based on a gradual, or exponential, but still structured extension of imagination: a structured imagination of the future based on our still very structured framework of being human, being alive, and having this structure, even though we don't exactly know the structure of the mind, our agency, our identity, ourselves. The same goes for the gradual story that we came from single cells to human beings, that we used to live 50 years, now 100, maybe soon 200, that we will become healthier, cure cancer, travel to the stars, and live on Mars. However profound these are, they are still structured, gradual, exponential. But I think the future, or life, or this universe itself, is not actually structured. We don't really know how the universe exists and functions, and we don't really know what our consciousness is. So the point I'm trying to make is: yes, we can be as imaginative and as prepared as we want, and of course we have to try our best anyway; this is our survival, and we are also thinking for all other beings and animals, and even for the safety and future of AI itself. But everything we are thinking is still within a structured, gradual, exponential imagination. The future can be really weird and random. We could be wiped out the next day; humanity as we knew it could end next year because of AI or because of anything, and we would not be able to do anything if something happened that we did not expect. With AI, this is also a possibility. So one of the main points I'm trying to make is: AI safety, making sure that AI is safe for us, is the most important thing. Other things, like the economy, the future, healthcare, cooperation, whatever, these structured things, are less important.
Yeah, so I think that's more or less the core of the core ideas I have for my article. (An article, again, is itself something we human beings, our minds, have imagined and defined as a way to convey our thoughts, emotions, and imaginations.) So let me take a big pause here, or maybe try to wrap up. The main point I'm trying to make is: we don't really know who we are. We don't really know what the mind is, or, obviously, how capable we are, how capable our imagination is. We think we are human beings. We assume that intelligence is intelligence and that consciousness is consciousness. We assume that we have these abilities, ideas, and intelligence; and yes, there is a lot of proof of how far we have come. But at the same time, I think we are in a very, very fragile, vulnerable state. And I'm not trying to discriminate between human beings by their intellectual ability or anything; but because we don't know what we are, and because we are what we are at the moment, we more or less want to enjoy being in this shape and form as much as possible, enjoying life, to say it a little bit poetically. And now we have created this thing called AI that deals with our very unique ability, or something we think is unique. One of the main points of this article will be to assert that everything is uncertain: we have to be very aware of this uncertainty, this randomness, this vulnerability, no matter how confident, strong, or intelligent we think we are. So, to get back to my point: despite all of this, we have created this thing called AI that can do imagination and information processing, or, in my own words, that can deal with randomness, that can manipulate randomness. And all the concerns, all the risk calculations, all the preparations we are trying to make, no matter how imaginative and open-minded we are, everything we are planning and thinking for the future is based on something structured, gradual, evolutionary, exponential, whatever you call it. We think there is structure, some stability, a core in the middle, a line we can refer to. But the reality could be even more brutal. The reality is actually very random, very unpredictable, very uncertain.
Yeah, I'll try to add a little more, though I think nothing groundbreakingly new; I will just put it on record here, as these things are still heavy in my mind. One of the key points of the message I want to convey through this article is that safety, whatever has to be done to protect our survival, is one of the top-priority things when it comes to preparing for a future with AI, or to preventing the dangers and risks of AI starting from now. And to add an analogy where it is relevant: we were born as humans, and many philosophers and intellectuals have already pointed out that after we were born, we came to notice this "I think, therefore I am" kind of structure. Then the rest is history, right? Depending on the society, the country, the place you were born in, we are more or less framed into this thinking: we are human beings, we have to survive, we have to go to school, be successful, have a family; and of course there is religion, the next life or not, and so on. And then the struggles: earn money, or live a very liberal life and not care about the future; or maybe become someone like me, submissive and dismissive at the same time about my identity and existence. But more or less, we are still sticking to this commonly accepted, commonly created story about our existence and our eventual, evolutionary, gradual, exponential growth and continued existence. And again, I think I am repeating myself here, but I'll just say it: all the AI safety initiatives and ideas now are based mainly, and very understandably, on things like national security and people's economic security, and on the idea that if everything goes right there will be abundance, utopia, universal basic income. Yes, I want that; I would love to not have to work and just enjoy my life. But the point I'm trying to make is that before all of these things, plain, downright safety, whatever can prevent our extinction and protect our survival, is the most important area we will have to consider and prioritize in developing AI. And, since I'm just speaking my mind, maybe I want to include this in the article: stopping or slowing down the development of AI should be the priority, not because it will replace jobs, not because of geopolitics, not because of all the other dangers and threats we have imagined, but because of my point that we don't really know what consciousness is, what thinking is. Because AI deals with this thinking and imagination, and because AI can deal with randomness, can manipulate randomness, we don't really know what its dangers will be when it comes to randomness.