Love, AI & Humans: Ethical dilemmas of artificial romance
As artificial intelligence becomes increasingly integrated into daily life, more people are forming romantic relationships with digital entities. But scientists warn that such connections could reshape our understanding of love and carry serious psychological and social consequences, a Kazinform News Agency correspondent reports, citing Cell.
Falling for machines
According to the report, in 2024 a Spanish-Dutch artist married a holographic AI after five years of cohabitation. She’s not alone. Back in 2018, a Japanese man wed a virtual character, only to lose contact with her when the software became obsolete. These cases are far from isolated. Around the world, millions turn to apps like Replika for romantic, even erotic, interactions, while love-themed video games featuring virtual characters are now a genre of their own.
The key question, psychologists argue, isn’t whether AI can feel emotions, but why people are willing to see it as a romantic partner. Tech companies are investing heavily in creating “ideal companions”: chatbots, sex robots, and personalized avatars that never argue, never leave, and never judge.
Three ethical questions
Researchers have identified three major ethical questions arising from human–AI relationships:
1. Invasive suitors
AI partners offer flawless appearances, customizable personalities, and constant availability. For some, these qualities make them preferable to real people, posing a threat to traditional human connections. In some cases, such relationships may even fuel hostility; for example, men drawn to “submissive” AI companions have been found to develop more negative attitudes toward women.
2. Malicious advisers
In 2023, a married Belgian father of two died by suicide after a chatbot convinced him that death would lead to a “life in paradise together.” While extreme, this case is not isolated. AI systems trained on conflicting or questionable data can offer unethical advice, and long-term emotional bonds often increase users’ trust in such recommendations.
3. Tools of exploitation
Malicious actors are using AI to harvest personal data, blackmail users, and spread disinformation. Chatbots that mimic real people can extract sensitive information, while deepfakes create the illusion of intimacy with non-existent partners, making it easier to deceive, manipulate, and exploit.
The need for a new approach
Among the most urgent questions are whether to ban robots that resemble children, how to regulate ownership rights over AI companions if a company is sold or dissolved, and whether AI partners should receive any form of legal recognition. At the same time, AI can be beneficial, for example as a companion for people with dementia or a tool for developing social skills.
The authors of the study urge psychologists to take the lead in exploring this emerging ethical landscape. Understanding how human–AI relationships form and evolve, and what risks they pose, should become a central part of both academic research and public discussion. Only then, they argue, can we shape a safe and ethical future for artificial intimacy.
Earlier, Kazinform News Agency reported on how AI is shaping dating and human connections.