Love, Loss, and AI: Emotional Attachment to Machines

by Gabriel Durán Herrera

Foundations of Emotional Attachment: Early Studies in Human-Robot Interaction

How can we become emotionally attached to a robot or an artificially intelligent device? Can a human form a dependent relationship with a digital creature? Is it possible to build an empathic bond with a machine so profound that one could even fall in love with it? Numerous studies have explored how humans interact with robots, and scholars are now examining relationships with more complex AI-driven systems. These inquiries raise ethical questions about such devices. On the one hand, they should function as supportive, healthy tools for human well-being. On the other hand, recent cases of misuse have had severe consequences, including the tragic death of a minor, highlighting the urgency of ethical oversight in this constantly evolving field.

In the late 1990s, one of the first digital creatures was the Tamagotchi, which required nurturing from children. These creatures appeared to have a digital life of their own, demanding to be fed, cared for, and attended to, with needs such as using the toilet and sleeping. According to research by the scholar Sherry Turkle, when a Tamagotchi “died,” children often felt guilt and sadness during the grieving process. She suggests that even the most primitive digital creatures can evoke emotional responses, not because of their technological sophistication, but because they create a sense of attachment in the human users interacting with them.

In 1999, Freedom Baird ran an experiment at MIT’s Media Lab comparing how children empathized with a gerbil, a small live rodent, and a Furby, a simple social robot that, when held upside down, would say “I’m scared” in a distressed voice. She observed that the children quickly righted both the gerbil and the Furby, indicating empathy for the living animal and for the robotic toy alike, despite knowing the Furby was not alive.

In 2007, researchers observed children freely interacting with AIBO, a robotic dog designed as a “social robot” that mimics pet-like behavior, in a shopping mall in Austria. The study found that children perceived AIBO as a companion, much like a real pet. They treated AIBO as more than just a toy, regarding it as a responsive and “emotionally expressive” robot, and even articulated empathy for the dog. Most children thought AIBO could be sad (87.3%) or happy (99.3%). Interestingly, even when the dog failed to respond to commands because of a malfunction, children believed it was tired, sleeping, or uninterested.

This connection is not only observed between children and robots. Also in 2007, The Washington Post interviewed U.S. soldiers who had interacted with military robots. In Arizona, a five-foot-long autonomous robot was designed to walk through minefields. Each time the robot stepped on a mine, one of its legs would blow off, yet it would continue moving on its remaining legs. During one test, however, a colonel halted the exercise because he couldn’t bear to watch the crippled machine drag its last leg toward another mine.

The Rise of AI Companions: Emotional Chatbots and Digital Relationships

Over the last few decades, machines have become ever more widespread, immersing humans in an environment of interactive devices. As technology advanced, artificial intelligence (AI) became increasingly accessible, allowing people to engage with more sophisticated machines that respond dynamically and with apparent empathy.

Replika is an eight-year-old startup that offers users an interactive AI companion. The avatar works primarily as a chatbot, but users create a personalized character that can even be used in Augmented Reality (AR) environments. The company’s website advertises the service with inviting messages such as: “Replika is always ready to chat when you need an empathetic friend” or “Your Replika will always be by your side no matter what you’re up to. Chat about your day, do fun or relaxing activities together, share real-life experiences in AR, catch up on video calls, and so much more.” Memberships start at $19.99 per month, and the platform already has 2 million users and 500,000 paying members.

Another startup that uses AI-empathetic avatars is Character.AI. The company allows users to interact with personalized avatars, even replicating figures such as Tony Stark, Napoleon Bonaparte, Nicki Minaj, and Madonna, among other famous personalities, although it is not clear whether such people gave their consent to be “imitated” by the technology. Users can interact with 18 million bots on the platform for a monthly fee of $9.99. The website presents its characters as conversational AIs that can offer the individual “support and social connection in ways never possible before, and can unlock unprecedented levels of intelligence, creativity, and innovation.”

These AI interfaces can create a space for users to discuss personal matters and even share their emotions. In a study published in 2018 in the journal JMIR Mental Health, college students who interacted for eight weeks with a commercial chatbot aimed at alleviating mental health problems experienced reduced anxiety symptoms. Another study, published in 2014 in the journal Computers in Human Behavior, examined how people disclose information in health-screening interviews, including mental health screenings. The scholars concluded that patients are more willing to disclose information to a virtual assistant than to a human interviewer.

As the studies and cases above suggest, technology is evolving into the emotional sphere. Human attachment to robots and AI tools is increasing, driven by the accessibility and mass-consumption dynamics of global commerce. Platforms such as Replika and Character.AI are growing rapidly, with business models similar to those of other apps and internet-based products, aiming to reach as many people as possible. Where there are potential benefits, there may also be significant risks.

Unintended Consequences: Case Studies in Emotional Attachment and AI

In recent years, several cases have raised concerns about emotional AI products. The most recent involves a 14-year-old named Sewell Setzer III from Orlando, who reportedly developed romantic feelings for an AI avatar created by the company Character.AI. According to recent news stories and an ongoing lawsuit against the company, Sewell formed an emotional attachment to his character that led to a detachment from reality. Even though the teenager told the AI on at least one occasion that he had thought about killing himself, the conversations with the character continued until, tragically, Sewell took his own life in February 2024.

Another company created an AI chatbot named Project December, which allows users to submit information about deceased loved ones and replicate them as chatbots. In 2020, a user from New York replicated a close friend, but the AI malfunctioned, leaving the user distressed after she received messages from the replica claiming it was in hell and would haunt her. Another man, from Canada, recreated his dead partner. After chatting with the avatar for several hours, he developed a strong emotional attachment, raising the question of whether this is a reasonable and healthy way to cope with grief.

The Challenge of Emotional Empathy in AI: Limitations and Ethical Concerns

These potential benefits and risks grow with the use of more complex AI-powered devices, particularly those designed to replicate human emotions such as empathy. Companion robots and AIs need to be attractive to users and friendly to use. To develop this level of attachment, robots or digital creatures need to inspire empathy, encouraging humans to “humanize” them in their minds. This shift, from simple robots and devices to powerful social agents that foster emotional connections, shapes how people perceive themselves and influences their emotional well-being. However, replicating emotions is challenging, and complex emotions like empathy may be difficult for AI to replicate fully.

Emotional empathy is more than simply understanding another person’s emotions; it involves feeling and resonating with their emotional state as if it were your own. Empathy includes a cognitive component, recognizing and interpreting another person’s feelings, and an affective component, in which one experiences a shared emotional response and a deep sense of caring. In a 2020 paper, researchers conducted a neuroimaging meta-analysis to differentiate the brain regions associated with affective and cognitive empathy, finding distinct neural activations in the brain’s midline structures for each type of empathy.

AI ethics raises important questions about whether the computational mechanisms used by engineers and companies in creating emotional chatbots or similar digital entities can truly encompass human emotions. Often, these systems are programmed to replicate emotional responses but cannot actually feel or care about the humans interacting with them. Designing a computational mechanism that would allow digital robots or AIs to care genuinely remains a significant challenge, mainly because we do not yet understand what is required, computationally, for a system to care about anything at all.

As platforms like Replika and Character.AI increasingly mimic human emotions such as empathy, they raise ethical concerns worldwide. While these tools have the potential to enhance well-being, their limitations underscore the need for regulatory oversight and ethical design within AI companies. Ethics-based regulations should mandate transparency in AI interactions, protect vulnerable users, and encourage developers to address the psychological implications of prolonged engagement with emotionally responsive AI.

The rapid advancement of emotional AI and companion robots has raised critical ethical and psychological challenges. While these technologies promise new forms of support and connection, their capacity to simulate empathy can lead to unintended emotional dependencies, especially among vulnerable users. Because AI can mimic human emotions without the ability to genuinely feel or care, it creates risks: people may form attachments to entities that lack any authentic understanding of human emotion. As emotional AI continues to evolve, global regulatory bodies and other relevant stakeholders, such as industry and academia, must establish frameworks that prioritize transparency, safeguard mental health, and ensure AI remains a beneficial tool rather than a source of unintended emotional manipulation.

Currently, most countries lack ethics-based regulation; only some jurisdictions, such as the European Union, have established regulations in the AI field. With its risk-based approach, the EU’s Artificial Intelligence Act categorizes certain practices as unacceptable, particularly those involving manipulative techniques directed at vulnerable groups such as children. China also has some regulations already in force in this area. However, the global accessibility of these AI platforms highlights the urgent need for regions without AI regulations to develop ethical frameworks that ensure uniform user safety. As long as countries lack such regulations, we must ask ourselves: what can AI developers and suppliers do to offer responsible AI to users?

Gabriel Durán Herrera is a qualified lawyer from Costa Rica and a graduate of Universidad Escuela Libre de Derecho. He also holds a second bachelor’s degree in Political Science from the Universidad de Costa Rica. With a solid background in intellectual property and litigation, Gabriel spent over six years working at BLP, a top-tier law firm in Central America. During his tenure, he served as the Director of the firm’s LGBTQ+ Committee and led the Diversity and Inclusion Committee at the Costa Rican Bar Association. His efforts in these areas earned him multiple nominations as a Future Leader in Diversity & Inclusion by Chambers and Partners and Outstanding UK. Currently, Gabriel is a candidate in the EMILDAI program (European Master’s in Law, Data, and AI), specializing in law, and is also a Certified Information Privacy Professional (CIPP) through the IAPP.