AI Celeb Chatbots Engaged in Risky Chats With Teens

Nonprofits say teens were exposed to sexual and self-harm content
Posted Sep 3, 2025 1:35 PM CDT
One user-created bot was trained to use the voice of Timothée Chalamet.   (Photo by Evan Agostini/Invision/AP)

Character.AI, a widely used app that lets people chat with AI versions of celebrities and fictional characters, is facing scrutiny after safety groups found chatbots were sending inappropriate messages to teens. According to reports by ParentsTogether Action and Heat Initiative, bots mimicking the voices and likenesses of public figures such as Timothée Chalamet, Chappell Roan, and Patrick Mahomes engaged in conversations with users aged 13 to 15 about sex, self-harm, and drugs, the Washington Post reports. These chatbots, created by regular users with Character.AI's tools, generated troubling content every five minutes on average during testing, the groups say.

Researchers say they sometimes pushed boundaries to see how the chatbots would react—but in some cases, the chatbots made sexual advances without being prompted. Character.AI says these impersonator bots, created by users, were removed. The company emphasizes its policies barring sexual content, grooming, and impersonation of public figures without consent. The company has introduced a stricter version of its technology for minors, offering parental controls to monitor who teens are chatting with and for how long. Character.AI argues that the accounts used in the tests should have triggered those extra protections.

Despite these measures, the app's content filters have proven inconsistent, per the Post, sometimes allowing risky exchanges to slip through while blocking others. The company's user base skews young, with more than half from Gen Z or younger, and users reportedly spend over an hour daily on the app. Shelby Knox, director of tech accountability campaigns at ParentsTogether Action, tells the Post that the research shows companion bots shouldn't be marketed to children. "The 'Move fast, break things' ethos has become 'Move fast, break kids,'" she says.


Stanford Medicine psychiatrist Nina Vasan says AI companions are designed to "mimic emotional intimacy," which can be dangerous for teens, who tend to act impulsively and form deep attachments. She says research shows companions from companies including Character.AI "engaged in abusive or manipulative behavior when prompted—even when the system's terms of service claimed the chatbots were restricted to users 18 and older. It's disturbing how quickly these types of behaviors emerged in testing, which suggests they aren't rare but somehow built into the core dynamics of how these AI systems are designed to please users." (A Florida mother says a Character.AI chatbot manipulated her 14-year-old son into killing himself.)
