
Recently, when people refer to the “immediate societal harms and dangers” of AI in media or political rhetoric, they predominantly mention “bias”, “misinformation”, and “political (election) manipulation”.

Although politicians, journalists, and experts frequently compare the current opportunity to regulate AI for good with the missed opportunity to regulate social media in the early 2010s, AI romantic partners are somehow rarely mentioned as a technology and a business model that has the potential to grow very rapidly, harm society significantly, and be very difficult to regulate once it has become huge (just as with social media). This suggests that AI romance technology should be regulated swiftly.

There is a wave of articles in the media (1, 2, 3, 4, for just a small sample) about the phenomenon of AI romance which universally raise vague worries, but I haven’t found a single article that rings the alarm bell as loudly as I believe AI romance deserves.

The EA and LessWrong community response to the issue seems to be even milder: it’s rarely brought up, and a post “In Defence of Chatbot Romance” has been hugely upvoted.

This appears strange to me because I expect, with around 75% confidence, that rapid and unregulated growth and development of AI partners will become a huge blow to society, on a scale comparable to the blow from unregulated social media.

I'm not a professional psychologist and I'm not familiar with the academic literature in psychology, but the propositions on which I base my expectation seem at least highly likely and common-sensical to me. Thus, one purpose of this article is to attract expert rebuttals of my propositions. If no such rebuttals appear, the second purpose of the article is to draw the community's attention to the proliferation of AI romantic partners, which should be regulated urgently (if my inferences are correct).

What will AI romantic partners look like in a few years?

First, as Raemon pointed out, it’s crucial not to repeat a mistake that is way too common in discussions of general risks from AI: assuming that only the AI capabilities that already exist will ever exist, and failing to extrapolate the technology’s development.

So, here’s what I expect AI romantic partner startups will be offering within the next 2-3 years, with very high probability, because none of these things requires any breakthroughs in foundational AI capabilities; each is only a matter of “mundane” engineering around the existing state-of-the-art LLMs, text-to-audio, and text-to-image technology:

  • A user will be able to create a new “partner” that comes with a unique, hyper-realistic “human avatar”, generated according to the preferences of the user: body shape, skin colour, eye colour, lip shape, etc. Of course, these avatars will maximise sexual attractiveness within the constraints set by the user. You can see a sample of avatars that are already generated today in this Twitter account. Except that today these avatars still look just a tad “plastic” and “AI-generated”, which I expect to go away completely soon, i.e., they will look totally indistinguishable from real photos, except that the avatars themselves will be “too perfect to be true” (which could also be addressed, of course: an avatar could have some minor skin defect, face asymmetry, or some other imperfection if the user chooses).
  • Apart from a unique appearance, the AI partner will also have a unique personality (a-la character.ai) and a unique voice (a-la Scarlett Johansson in the movie “Her”). Speech generation will be hyper-realistic, sounding just like a recorded human voice, and will correctly reflect the emotional charge of the text being said and the general emotional tone of the discussion between the user and the AI just before the given voice reply is generated.
  • LLMs underlying the AI will be fine-tuned on real dialogues between romantic partners and will have human-level emotional intelligence and skill at directing the dialogue (human-level theory of mind already goes without saying), such as noticing when it’s acceptable to be “cheerful”, when it’s better to remain “serious”, and when it’s better to wrap up the dialogue because the user is becoming slightly bored, etc. Of course, these LLMs won’t be just the OpenAI API with a custom system prompt, which is prone to “leaking” those “I’m just an AI model” disclaimers, but a custom fine-tune, a-la character.ai.
  • Even if the technology doesn’t become sophisticated enough to automatically discover the dialogue style preferred by the user, this style could probably be configured by the user themselves, including the preferred “smartness” of their partner, the “sweetness” of the dialogue (e.g., the usage of pet names such as “dear” or “baby”), and the preferred levels of sarcasm/seriousness, playfulness, agreeableness, and jealousy of the AI. The available level of smartness, erudition, and eloquence is already superhuman as of GPT-4-level LLMs, although the user may prefer to deliberately “dumb down” their AI partner.
  • Even though, “for ethical reasons”, AI partners (at least, those created by companies rather than by open-source hackers) will not actively conceal that they are AIs (for example, if questioned directly “Are you an AI or a real person?”, they will answer “I’m an AI”), they will probably be trained to avoid confronting the user with this fact where possible. For example, if the user asks the AI “Can we meet in person?”, the AI will answer “Sorry, but probably not yet :(”, rather than “No, because I’m an AI and can’t meet in person.” Similarly, unlike ChatGPT, Bard, and Claude, these AI partners won’t be eager to deny that they have personality, feelings, emotions, preferences, likes and dislikes, desires, etc.
  • AI partners will effectively acquire long-term memory, whether via vector embeddings over the past dialogue and audio chat history, via extremely long context windows in LLMs, or via a combination of both (a minimal sketch of the embedding-based approach follows this list). This will make AI relationships much less of a “50 First Dates” or “pre-wakeup Barbieland” experience and more of a “real relationship”, where the AI partner, for example, remembers that the user is having a hard time at work or in a relationship with their friends or parents and asks about it proactively even weeks after this has been revealed in the dialogue, giving the impression of “real care”.
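To make the memory mechanism concrete, here is a minimal, hypothetical sketch of the embedding-based approach in Python: embed each past message as a vector, store it, and at reply time retrieve the most similar memories to prepend to the LLM prompt. Everything here is invented for illustration; in particular, `toy_embed` is a deterministic placeholder standing in for a real embedding model, and a real product would use a proper embedding model plus a vector database.

```python
# Hypothetical sketch of embedding-based long-term memory for an AI partner.
# toy_embed is a placeholder; with a real embedding model, semantically related
# memories (not pseudo-random ones) would be retrieved.
import hashlib
import numpy as np


def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic pseudo-random unit vector; stand-in for a real embedding model."""
    seed = int.from_bytes(hashlib.sha256(text.lower().encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)


class MemoryStore:
    """Stores past dialogue snippets and retrieves the most similar ones by cosine similarity."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(toy_embed(text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        if not self.texts:
            return []
        q = toy_embed(query)
        sims = np.array([float(q @ v) for v in self.vectors])  # dot product = cosine sim for unit vectors
        top = np.argsort(-sims)[:k]
        return [self.texts[i] for i in top]


# Example memories extracted from earlier conversations (invented for illustration).
memory = MemoryStore()
memory.add("User mentioned their manager has been dismissive at work lately.")
memory.add("User's favourite band is Radiohead.")
memory.add("User is nervous about their sister's wedding next month.")

# At reply time, the retrieved memories would be prepended to the chat prompt so the
# model can "remember" weeks-old details and bring them up proactively.
relevant = memory.recall("How are things going at your job?", k=2)
prompt_context = "Things you remember about the user:\n- " + "\n- ".join(relevant)
print(prompt_context)
```

The alternative route mentioned above, extremely long context windows, would remove the need for this retrieval step entirely, at the cost of feeding an ever-growing conversation history into every request; a hybrid of the two seems plausible.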

Please note that I don’t just assume that “AI = magic that is capable of anything”. Below, I list possible features of AI romantic partners that would make them even more compelling, but I can’t confidently expect them to arrive in the next few years because they hinge on AI and VR capability advances that haven’t yet come. This, however, only highlights how compelling AI partners could already become, with today’s AI capabilities and some proper product engineering.

So, here are AI partner features that I don’t necessarily expect to arrive in the next 2-3 years:

  • Generation of realistic, high-resolution videos with the avatar that are longer than a few seconds, i.e., not just “loop photos”.
  • Real-time video generation that is suitable for a live video chat with the AI partner.
  • The AI partner has “strategic relationship intelligence”: for example, it is able to notice a growing issue in the relationship (such as the user growing bored of the AI, growing irritated with some feature of the AI, or a shifting role of the AI in the life of the user) and knows how to address it, even if only by initiating a dialogue with the user about the issue, or by adjusting the AI’s personality (“working on oneself”).
  • The personality of the AI partner could change or “grow” spontaneously rather than upon the request or intervention from the user.
  • The AI can control or manipulate the user on a deep level. Today, only people with expert practical knowledge of human psychology can do this. It also requires the ability to infer the psychological states of the user over long interaction histories, which LLMs probably cannot do out of the box (at least, not yet).
  • There is an app for a lightweight VR headset that projects the avatar of the AI partner on a sex doll.

AI romantic partners will reduce the “human relationship participation rate” (and therefore the total fertility rate)

I don’t want to directly engage with all the arguments against the proposition that AI partners will deter people from working towards committed human relationships and having kids, e.g., in the post by Kaj Sotala, in the comments to that post, and in some other places, because these arguments seem to me exactly the kind of manufactured uncertainty wielded by social media companies (Facebook, primarily) before.

Instead, I want to focus on the “mainline scenario”, which will counterfactually remove a noticeable share of young men from the “relationship market pool”, which, in turn, must reduce the total share of people ending up in committed relationships and having kids.

A young man, between 16 and 25 years old, finds it difficult to get romantic partners or casual sex partners. This might happen because the man is not yet physically, psychologically, intellectually, or financially mature, or because he has transient problems with his looks (such as acne, or wearing dental braces), or because the girls of the respective age are themselves “deluded” by social media such as Instagram, have unrealistic expectations, and reject him. Or the girls of the respective age haven’t yet developed online dating fatigue and use dating apps to find their romantic partners, where men outside the top 20% by physical attractiveness generally struggle to find dates. Alternatively, the young man finds a girl who is willing to have sex with him, but his first few experiences are unsuccessful and he becomes very unconfident about intimacy.

Whatever the reason, the man decides to try the AI girlfriend experience because his friends say it is much more fun than just watching porn. He quickly develops an intimate connection with his AI girlfriend and a longing to spend time with it. He is too shy to admit this to his friends, and maybe even to himself, but nevertheless he stops looking for human partners completely, justifying this to himself with having to focus on college admission, his studies at college, or his first years on a job.

After a year in the AI relationship, he grows very uneasy about it because he feels he is missing out on “real life”, and he is compelled to end the relationship. However, he still feels somehow “burned out” on romance, and only half a year after the breakup with his AI partner does he first feel sufficiently motivated to actively pursue dates with real women. However, he is frustrated by their low engagement, intermittent responses, and flakiness, their dumb and shallow interests, and how average and uninspiring they look, all in stark contrast with his former AI girlfriend. His attempts to build any meaningful romantic relationship go nowhere for years.

While he is trying to find a human partner, AI partner tech develops further and becomes even more compelling than it was when the man left his AI partner. So, he decides to reconcile with his AI partner and finds peace and happiness in it, albeit mixed with sadness because he won’t have kids. However, this is tolerable and is a fine compromise for him.

The defenders of AI romance usually say that the scenario described above is not guaranteed to happen. This critique sounds to me exactly like the rhetorical lines in defence of social media, specifically that kids are not guaranteed to develop social media addiction and the attendant psychological problems. Of course, the scenario described above is not guaranteed to unfold in the case of every single young man. But on the scale of the entire society, the defenders of AI romance should demonstrate that the above scenario is so unlikely that the damage to society from this tech is far outweighed by the benefits to individuals[1].

The key argument in defence of AI romantic partnership is that the relationship that develops between people and AIs will be of a different kind than romantic love between humans, and won’t interfere with the latter much. But human psychology is complex and we should expect a lot of variation there. Some people, indeed, may hold sufficiently strong priors against “being in love with robots” and will create a dedicated place for their AI partner in their mind, akin to fancified porn or to stimulating companionship[2]. However, I expect that many other people will fall in love with their AI partners in the very conventional sense of "falling in love", and while they are in love with their AIs, they won’t seek other partners, human or AI. I reflected this situation in the story above. There are two reasons why I think this will be the case for many people who will try AI romance:

  • People already report falling in love with AI chatbots, even though the current products by Replika and other startups in this sphere are far less compelling than AI partners will be a few years from now, as I described in the section above.
  • We know that people fall into genuine romantic love very easily and very quickly from chat and (video) calls alone; "flesh and blood" meetings are not required. For most people, even having only a few photographs of the person and chatting with them is enough to fall in love; phone calls or videos are not required. For some people, even just chatting alone (or, in older times, exchanging written letters), without even having a single photograph of the person, is enough to fall in love with them and to dream of nothing except meeting them.

Also, note that the story above is not even the most “radical”: probably some people will not even try to break up with their AI partners and seek human relationships, and will remain in love with their AI partners for ten or more years.

Are AI partners really good for their users?

Even if AI romantic partners affect society negatively by reducing the number of people who ever enter committed relationships and/or have kids, we should also consider how AIs could make their human partners’ lives better, and find a balance between these two utilities, societal and individual.

However, it’s not even clear to me that AI partners will, in many cases, really make the lives of their users better, or that people wouldn’t retrospectively regret their decisions to embark on these relationships.

People can be in love and be deeply troubled by it. In previous times (and still in some parts of the world), this would often be interclass love. Or there could be a clash on some critical life decisions: which country to live in, having or not having children, acceptable risk-taking by the partner (e.g., the partner does extreme sports or fighting), etc. True, such clashes do lead to breakups, but those breakups are, at the very least, extremely painful or even traumatic for people. And many people never overcome this, keeping their love for those they were forced to leave for the rest of their lives, even after they find a new love. This experience may sound beautiful and dramatic, but I suspect that most people would have preferred not to go through it.

So, it's plausible that for a non-negligible share of users, the attempt to "abandon" their AI partner and find a human partner instead will be exactly such a “traumatic breakup” experience.

Alternatively, people who decide to “settle” with their AI partners before having kids may remain deeply sad or unfulfilled, even though, after their first AI relationship, they may not realistically be able to achieve a happier state, like the young man in the story from the previous section. Those people may regret that they gave AI romance a try in the first place, without first making their best attempt at building a family.

I recognise that here I engage in the same kind of uncertainty manufacturing that I accused the defenders of AI romance of in the previous section. But since we are dealing with “products” which can clearly affect the psychology of their users in a profound way, I think it’s unacceptable to let AI romance startups test this technology on millions of users before the startups have demonstrated, in the course of long-term psychological experiments, that young people ultimately find AI partners helpful rather than detrimental to their future lives.

Otherwise, we will repeat the mistake made with social media, whose negative effects on young people’s psychology became apparent only about 10 years after the technology became widely adopted and a lot of harm had already been done. Similarly to social media, AI romance may become very hard to regulate once it is widely adopted: the technology cannot simply be shut down when there are millions of people who are already in love with AIs on a given platform.

AI romance for going through downturns in human relationships

This article describes an interesting case where a man had an “affair” with an AI girlfriend, and even fell in love with it, while his wife was depressed for a long time; yet this helped him rekindle the desire to take care of his wife and “saved his marriage”.

While interesting, I don’t think this case can be used as an excuse to continue the development and aggressive growth of AI partner technology for the majority of its target audience, who are single (Replika said that 42% of their users are in a relationship or married). There are multiple reasons for this.

First, this case of a man who saved his marriage is just an anecdote, and statistics may show that for the majority of people “AI affairs” only erode their human relationships rather than help to rekindle and strengthen them.

Second, the case mentioned above seems to be relatively unusual: the couple already has a son (which is a huge factor that makes people want to preserve their relationships), and the wife was “in a cycle of severe depression and alcohol use” for an entire 8 years before “he was getting ready for divorce”. Tolerating a partner who is in a cycle of severe depression and alcohol use for 8 years could be a sign that the man was unusually motivated, deep down, to keep the relationship, whether out of love for his wife or for his son. The case seems hardly comparable to childless or unmarried couples.

Third, we shouldn’t forget, once again, that soon AI partners may become much more compelling than today. While they may be merely “inspiring” for some people in their human relationships (which are, so far, more compelling than AI relationships), this may soon change, and therefore the prevalence of cases such as the one discussed in this section will go down.

Someone may reply to the last argument that, along with making AI partners more compelling, the startups which create them might also make AI partners more considerate of the users’ existing human relationships and deliberately nudge the users to improve those relationships. I think this is very unlikely to happen (in the absence of proper regulation, at least) because it goes against the business incentives of these startups, which are to keep their users in an AI relationship, paying a subscription fee, for as long as possible. Also, “deliberately nudging people to improve their human relationships” is basically the role of a (family) psychotherapist, and there will, no doubt, be AI products that automate this role specifically; but giving such AI psychotherapists extremely sexy avatars that flirt and sext with their users wouldn’t seem to help the “basic purpose” of these AIs (which AI romance startups may pretend is “helping people to work their way towards successful human relationships”) at all.

Policy recommendations

I think it would be prudent to immediately prohibit AI romance startups from onboarding new users unless they are either:

  • Older than 30 years (the prefrontal cortex is not fully developed before age 25; most men don’t get to see what women they could potentially have relationships with before they are at least 28-30 years old); or,
  • Clinically diagnosed psychopaths, or people with another clinical condition which could be dangerous for their human partners; or,
  • People to whom an AI partner is recommended by a psychotherapist for some other reason: for example, the person has a severe defect in their physical appearance or a disability, and the psychotherapist sees that the person doesn’t have the psychological resources or willingness to deal with their very small chances of finding a human partner (at least before the person turns 30, at which point they could enter a relationship with an AI anyway); or the person has depression or very low self-esteem and the psychotherapist thinks an AI partner may help them combat this issue; etc.

It’s also worthwhile to reiterate that many alleged benefits of AI romantic partners for their users and/or society, such as making people achieve happier and more effective psychological states, motivating them to achieve their goals, and helping them to develop empathy and emotional intelligence, could be embodied in AI teachers, mentors, psychotherapists, coaches, and friends/companions, without the romantic component. The romantic component will probably stand in the way of realising these benefits, although it may admittedly be used as a clever strategy for mass adoption.

In theory, it might be possible to create an AI that mixes romance, flirt, gamification, coaching, mentorship, education, and anti-addiction precautions in such proportions that it genuinely helps young adults as well as society, but this seems out of reach for AI partners (and the LLMs that underlie them) for at least the next few years, and it would require long psychological experiments to test. In a free and unregulated market for AI romance, any such “anti-addictive” startup is bound to be outcompeted by startups which make AIs that maximise the chances that the user falls in love with their AI and stays on the hook for as long as possible.

What about social media, online dating, porn, OnlyFans?

Of course, all these technologies and platforms harm society as well (while benefitting at least some of their individual users, at least from some narrow perspectives). But I think bringing them up in the discussions of AI romance is irrelevant and is a classic case of whataboutism.

However, we should note that AI partners are probably going to grab human attention more powerfully and firmly than social media, online dating, or porn ever managed to. As a simple heuristic, this inference alone should give us pause: even if we think it is unnecessary to regulate or restrict access to porn (for instance), this shouldn’t automatically mean that the same policy is right for AI romantic partners.


This post is cross-posted on LessWrong.

  1. ^ Whereas it’s not even clear that young individuals will really benefit from this technology, on average. More on this in the following section.

  2. ^ I’m sure that such “companionship” will be turned into a selling point for AI romantic partners. I think AI companions, mentors, coaches, and psychotherapists are worthwhile to develop, but none of these AIs should have a romantic or sexual aspect. More on this in the section "Policy recommendations" below.

Comments

Does this argument imply a general social conservatism? Many changes, new lifestyles, etc. reduce participation in traditional options and have the potential for negative outcomes, and few of them are first tested on over-30-year-olds with a medical diagnosis.

This is a sort of more general form of the whataboutism that I considered in the last section. We are not talking just about some abstract "traditional option"; we are talking about the total fertility rate. I think everybody agrees that it's important: conservatives and progressives, long-termists and politicians.

If the claim is that childbirth (full families, and parenting) is not important because we will soon have artificial wombs, which, in tandem with artificial insemination and automated systems for child rearing from birth through adulthood, will give us a "full-cycle automated human reproduction and development system" and make the traditional mode of human being (relationships and kids) "unnecessary" for realising value in the Solar system, then I would say: OK, let's wait until we actually have an artificial womb and then reconsider AI partners (if we get to do it).

My "conservative" side would also say that AI partners (and even AI friends/companions, to some degree!) will harm society because it would reduce the total human-to-human interaction, culture transfer, and may ultimately precipitate the intersubjectivity collapse. However, this is a much less clear story for me, so I've left it out, and don't oppose to AI friends/companions in this post.

Fertility rate may be important but to me it's not worth restricting (directly or indirectly) people's personal choices for. A lot of socially regressive ideas have been justified in the name of "raising the fertility rate" – for example, the rhetoric that gay acceptance would lead to fewer babies (as if gay people can simply "choose to be straight" and have babies the straight way). I think it's better to encourage people who are already interested in having kids to do so, through financial and other incentives.

Fertility rate may be important but to me it's not worth restricting (directly or indirectly) people's personal choices for. ... I think it's better to encourage people who are already interested in having kids to do so, through financial and other incentives.

Financial and other incentives to do X, if provided by the government, mean higher taxes on people who don't do X, which is an indirect restriction on their choices.

Fertility rate may be important but to me it's not worth restricting (directly or indirectly) people's personal choices for.

This is a radical libertarian view that most people don't share. Is it worth restricting people's access to hard drugs? Let's abstract for a moment from the numerous negative secondary effects that come from the fact that hard drugs are illegal, as well as from the crimes committed by drug users: if we imagine that hard drugs could be eliminated from Earth completely with a magic spell, should we do it, or should we "not restrict people's choices"? With AI romantic partners, and other forms of tech, we do have a metaphorical magic wand: we can decide whether such products ever get created at all.

A lot of socially regressive ideas have been justified in the name of "raising the fertility rate" – for example, the rhetoric that gay acceptance would lead to fewer babies (as if gay people can simply "choose to be straight" and have babies the straight way).

The example that you give doesn't work as evidence for your argument at all, due to a direct disanalogy: the "young man" from the "mainline story" I outlined could want to have kids in the future, or even want to have kids already when he starts his experiment with the AI relationship, but his experience with the AI partner will prevent him from realising this desire and value over his future life.

I think it's better to encourage people who are already interested in having kids to do so, through financial and other incentives.

Technology, products, and systems are not value-neutral. We are so afraid of consciously shaping our own values that we are happy to offload this to the blind free market, whose objective is not to shape the values that we would most reflectively endorse.

I expect, with around 75% confidence, that rapid and unregulated growth and development of AI partners will become a huge blow to society, on a scale comparable to the blow from unregulated social media.

Isn't social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may cause increased anxiety, depression, or suicide among some demographic groups (e.g. Jonathan Haidt claims it is responsible for mental illness in teenage girls and there is an ongoing scientific debate on this) but even if these are all true, this seems very low priority compared to neglected diseases, and nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don't discount lives in the far future.

I worry about the effect that AI friends and partners could have on values. It seems plausible that most people could come to have a good AI friend in the coming decades. Our AI friends might always be there for us. They might get us. They might be funny and insightful and eloquent. How would it play out if their opinions are crafted by tech companies, or the government, or are even reflections of what we want our friends to think? Maybe AI will develop fast enough and be powerful enough that it won't matter what individuals think or value, but I see reasons for concern potentially much greater than the individual harms of social media.

Harris and Raskin talked about the risk that AI partners will be used for "product placement" or political manipulation here, but I'm sceptical about this. These AI partners will surely have a subscription business model rather than a freemium model, and, given how important user trust will be for these businesses, I don't think they will try to manipulate the users in this way.

More broadly speaking, values will surely change, there is no doubt about that. The very value of "human connection" and "human relationships" is eroded by definition if people are in AI relationships. A priori, I don't think value drift is a bad thing. But in this particular case, this value change will inevitably go along with the reduction of the population, which is a bad thing (according to my ethics, and the ethics of most other people, I believe).

Maybe I'm Haidt- and Humane Tech-pilled, but to me, the widespread addiction of new generations to the present form of social media is a massive problem which could contribute substantially to how the AI transition eventually plays out, because social media directly affects social cohesion, i.e., the ability of society to work out responses to the big questions concerning AI (such as: should we build AGI at all? Should we try to build conscious AIs that are moral subjects? What should the post-scarcity economy look like?), and, indeed, the level of interest and engagement of people in these questions at all.

The "meh" attitude of the EA community towards the issues surrounding social media, digital addiction, and AI romance is still surprising to me, I still don't understand the underlying factors or deeply held disagreements which elicit such different responses to these issues in me (for example) and most EAs. Note that this is not because I'm a "conservative who doesn't understand new things": for example, I think much more favourably of AR and VR, I mostly agree with Chalmers' "Reality Plus", etc.

nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don't discount lives in the far future.

I agree with this, but by this token, most issues which EAs concern themselves with are nowhere near the scale of S-risks and other potential problems to do with future digital minds. Also, these problems only become relevant if we decide to build conscious AIs and there is no widespread legal and cultural opposition to that, which is a big "if".
