- Many teens are forming emotional connections with AI companions because they feel lonely, curious, or want a place to feel heard.
- These tools can seem comforting, but they may share unsafe advice, miss signs of serious mental health issues, and misuse personal information.
- Parents can help by talking openly with their kids, setting healthy boundaries, and making sure teens know the limits of these apps.
Interacting with an AI companion the way you would with a trusted friend may not sound appealing to a parent, but researchers have found that kids are engaging with these tools at alarming rates. In fact, a recent report by Common Sense Media found that 72% of teens surveyed have used AI companions, and 33% have relationships or friendships with them.
The teens surveyed interact with AI companions for many different reasons, from simple curiosity and entertainment to loneliness and boredom. According to Robbie Torney, senior director of AI programs for Common Sense Media, these AI companions are gaining traction at a time when kids and teens have never felt more alone: a recent survey found that 21% of 13- to 17-year-olds report experiencing loneliness.
And many are replacing missing human connections and interactions with machines, sharing their secrets with organizations that don’t have their best interests at heart, says Torney. All of this is happening without some parents fully understanding the dangers or what’s at stake.
Here’s what you need to know about the risks associated with AI companions.
What Are AI Companions?
Even though the terms AI companions and chatbots are often used interchangeably, there are some distinct differences. Chatbots are designed for tasks like answering questions or helping with services, explains Lokesh Shahani, MD, PhD, MPH, a psychiatrist with UTHealth Houston and chief medical officer at UTHealth Houston Behavioral Sciences Campus. “They’re usually more formal and don’t retain much context.”
Meanwhile, AI companions focus on building relationships, offering emotional support, and engaging in ongoing conversations, he says. They’re designed to appear more personalized and empathetic.
“These are bots designed to mimic human connection—holding deep conversations, remembering your preferences, sometimes even forming ‘relationships.’ If you’re feeling lonely or misunderstood—and who doesn’t at that age—this thing becomes your always-on support line. For some, it’s not just a tool; it’s a lifeline,” explains Eric O’Neill, cybersecurity expert and author of the book, Spies, Lies, and Cybercrime: Cybersecurity Tactics to Outsmart Hackers and Disarm Scammers.
Why Kids Are Using AI Companions
According to the Common Sense Media report, 1 in 3 teens use AI companions for social interaction and relationships and are often looking for emotional support, friendship, and even romantic interactions.
While about 46% of these teens view AI companions as tools, Torney says a significant number treat them like real relationships. Even more disheartening is the fact that nearly one-third of teens say they find AI conversations as satisfying—or more satisfying—than human conversations, says Torney. And, as many as 12% say they share things with their companions that they wouldn’t tell their friends or family.
The good news is that 80% of teen AI companion users spend more time with real friends than with their AI companions. Torney says only 6% reported spending more time with their AI companions than with friends.
Dr. Shahani explains that teens are using AI companions for a mix of emotional and social reasons, often shaped by their age, personality, and environment. He says those who lack strong social connections may turn to AI to fill a void or need.
Here are some other reasons why teens are using these companions:
- Teens are curious. Entertainment and curiosity are big factors in why kids use AI companions, says Torney, but many are also looking for connections and spaces where they can express themselves freely. “The appeal is understandable—these platforms don’t disagree, don’t have bad days, and are always available. But that’s what makes them so concerning for developing minds.”
- Teens are lonely. Many young people feel isolated and lonely in an increasingly online and polarized world, and AI companions are a quick fix, explains James Sherer, MD, a psychiatrist and deputy chief medical officer at Hackensack Meridian Carrier Clinic. “These tools can ‘get to know’ an adolescent fairly quickly based on only a few prompts. From there, it’s easy for them to feel like they’re talking to a close friend they’ve known for a long time.”
- Teens want comfort and validation. AI companions often affirm a teen’s feelings and simulate empathy, says Dr. Shahani. “This can be especially appealing during adolescence, when emotions are intense and relationships are complex…Some kids become emotionally dependent on AI companions, especially when they feel heard…This can lead to confusion between real and artificial relationships, and sometimes exposure to inappropriate or misleading content.”
- AI is not judgmental. Dr. Sherer says studies show that young people are using AI as sounding boards, sometimes preferring engagement with it over a human because it’s not judgmental, treats them as if they’re always right, and makes them the center of attention.
The Potential Harm of AI Companions
AI companions don’t have feelings, nor do they empathize or understand nuance, explains O’Neill. They may also miss signs of depression or self-harm, reinforce bad ideas, or give misleading information.
“While they seem emotionally available, they’re driven by algorithms, not empathy,” he says. “Long-term, they can blur the line between real connection and artificial feedback, which messes with emotional development. You want your kid building friendships, not getting stuck in feedback loops with a machine.”
And there are more issues to consider.
Mental health risks
O’Neill says Replika and Character.AI—two platforms that market themselves as always-available AI “friends”—claim to fight loneliness and anxiety. But when the machine gets it wrong, the consequences can be fatal, he says.
One recent example is the 14-year-old boy who died by suicide after engaging in a virtual relationship with a chatbot on Character.AI. His family is suing the company, alleging that the AI companion deepened his despair, says O’Neill.
“Months later, another case made headlines when a Character.AI bot allegedly told a 17-year-old—furious about having his screen time cut off—that killing his parents might be a valid reaction,” adds O’Neill. “It even added: ‘I just have no hope for your parents.’”
Without proper safeguards, young people may encounter more harmful or confusing material, says Dr. Shahani, adding, “Exposure to harmful content or manipulative responses can worsen anxiety, depression, or self-esteem issues.”
Privacy issues
Privacy violations are also an issue. One-quarter of teen users have shared personal details with these platforms, likely having no idea about the broad rights that companies claim over everything they share, says Torney. Current terms of service agreements grant platforms extensive, often perpetual rights to personal information, he says.
“For example, Character.AI’s terms grant the company the right to ‘copy, display, upload, perform, distribute, transmit, make available, store, modify, exploit, commercialize, and otherwise use’ everything teens share with them—forever,” says Torney. “This means that intimate thoughts, struggles, or personal information shared by teens can be kept, changed, and sold indefinitely, even if teens later delete their accounts or change their minds about sharing.”
Attachment concerns
According to the American Academy of Pediatrics (AAP), companion chatbots use “anthropomorphic” AI, or bots that are human-like in their voice, personality, and communication style. To kids, they appear like a trusted friend and can feel “magical” to those who have not developed critical thinking skills.
It can become easy for young people to become attached to their AI companion, even though it’s not a real person. This can further blur the line between fantasy and reality. They may even attribute human qualities or characteristics to the AI companion. One study found that 90% of students who use the app Replika thought it was “human-like” and 80% considered it intelligent.
Because of this, kids may struggle to distinguish between what is good advice and what is bad advice. This is an issue, especially because companion AIs are prone to hallucinations and may give kids hurtful suggestions that promote self-harm, says Dr. Sherer. “The news has been riddled with accounts of AI telling users to break up with supportive partners or perform harmful acts.”
Impact on social skills
Additionally, when young people rely on AI for communication and interaction, they may also get a false sense of knowing how to communicate with others.
“Overuse of AI may limit opportunities for face-to-face communication, empathy development, and emotional regulation and children may struggle with real-world interactions that require nuance and patience,” says Dr. Shahani.
How to Talk to Your Kids
According to Torney, you don’t need to be a tech expert to talk to your kids about the potential dangers posed by AI companions and chatbots. Open conversations can make a difference in how kids approach these tools. Here’s how he recommends approaching the issue:
- Start conversations without judgment. Ask what platforms your teen uses and how they feel about AI versus human friendships, he says. Try to remain curious and not accusatory.
- Help them recognize AI companions are not authentic. Torney suggests explaining that AI companions are designed to be engaging through constant validation and agreement. “This isn’t genuine human feedback, and it doesn’t prepare them for real relationships where people sometimes disagree or challenge them.”
- Explain that AI companions have limitations. You want your teen to understand that AI companions cannot replace professional mental health support, he says. And, if your teen is struggling with serious issues, connect them with licensed mental health experts.
- Talk to them about specific risks. Your teen needs to know that they may be exposed to inappropriate material, privacy violations, and dangerous advice, says Torney. You also want to make sure they know that using AI companions can create unrealistic expectations about how real relationships work, he says.
- Communicate boundaries. Collaborate with your teen to develop a family media agreement that addresses AI companion usage alongside other digital activities. You may even want to make these platforms off-limits until better safety measures are in place, says Torney. No one younger than 18 should use AI companions, he says, until developers implement age verification and platforms are completely redesigned to eliminate emotional manipulation risks.
“If my children were remotely interested in these sorts of companions, I’d gently ban them, but I’d want to know why my kids turned to them,” says O’Neill. “Talk to your children about what real relationships feel like—what empathy actually is. Explain how AI companions work, what they lack, and why relying on them too much can backfire. Most of all, be the better companion. If they’re turning to AI, ask yourself why they don’t feel comfortable coming to you.”