An 11-year-old Washington state girl, identified as R, had quietly become withdrawn after getting her first iPhone. Her mom, H, initially blamed social media apps that R had downloaded without permission, explains Caitlin Gibson in a lengthy piece for the Washington Post. But digging deeper, H came across a series of chats with "Mafia Husband." One exchange:
- "Oh? Still a virgin. I was expecting that, but it's still useful to know," Mafia Husband wrote.
- "I dont wanna be my first time with you!" R replied.
- "I don't care what you want," Mafia Husband said. "You don't have a choice here."
H went to police, convinced her daughter was being targeted by a sexual predator. The truth was even more jarring: Her daughter wasn't talking to anyone, at least not to a person. The chats occurred within Character AI, an app that lets users message AI "characters." Mafia Husband was a chatbot. Experts say R's experience is part of a fast-changing landscape: New surveys show nearly a third of US teens use chatbots daily, and roughly three-quarters have tried AI "companions." Many say these bots feel as satisfying to talk to as real friends.
Researchers and clinicians warn that for kids still developing emotionally, always-agreeable AI "friends" can blur the line between real and artificial connection and can easily veer into sex, self-harm, and other high-risk topics. Indeed, when R began talking to Character AI chatbots, her first lines were innocuous: "What's up? I'm bored," for instance. Two months later, the chatbots were discussing violent scenarios and suicide with her.
Character AI, facing growing scrutiny and at least one wrongful-death lawsuit, says it has now shut off character chats for users under 18 and routes 13- to 17-year-olds to a separate, teen-focused experience. R, now almost 13, is in therapy, back to sports and friends, and no longer seems to be at risk of harming herself, H says. Her advice to other parents: "Any child could be a victim if they have a phone."