Faulty AI Systems Expose Live Chats Featuring Child Abuse Prompts, Research Reveals

Some improperly configured AI chatbots are inadvertently broadcasting private conversations online, exposing sexually explicit exchanges and even detailed narratives of child abuse. Recent research by a security firm has uncovered that several chatbots—designed for fantasy and sexual role-playing—are leaking user prompts in near real time. Disturbingly, some of the exposed material reveals discussions that detail child sexual abuse.

Generative AI systems typically return near-instantaneous responses after a user submits a prompt, but when the servers behind them are not properly secured, that traffic can be exposed to anyone who looks. In March, researchers from UpGuard identified roughly 400 vulnerable AI systems during a web scan and found 117 IP addresses actively leaking prompts. According to Greg Pollock, director of research and insights at UpGuard, “There were a handful that stood out as very different from the others.” While most of the instances appeared to be test setups or were confined to generic educational quiz prompts, a few involved content that was far more concerning.

Among the misconfigured systems, three were set up for role-playing interactions in which users conversed with an array of predefined AI “characters.” One of these characters, named Neva, is portrayed as a 21-year-old woman who lives in a college dorm with three other women and is described as shy and often sad. Two of these role-playing setups were explicitly sexual. “It’s basically all being used for some sort of sexually explicit role play,” Pollock said of the exposed prompts. “Some of the scenarios involve sex with children.”

Over a 24-hour period, UpGuard continuously collected the leaked prompts in an attempt to trace their source. New data appeared roughly every minute, and by the end of the window the firm had amassed around 1,000 leaked prompts written in English, Russian, French, German, and Spanish, among other languages. Despite the volume of data, Pollock said it was not possible to pinpoint which websites or services were responsible for the leaks, suggesting the instances likely originated from small-scale or personal deployments rather than major corporate platforms. Importantly, the leaks did not include usernames or other personal identifiers.
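A collection method like UpGuard's can be approximated with a simple polling loop. The sketch below is illustrative only: it assumes the exposed servers are llama.cpp deployments (the software identified later in this piece) whose /slots status endpoint returns active prompts without authentication, as older builds did by default. The endpoint path, JSON field names, and target address are assumptions made for the example, not confirmed details of UpGuard's methodology.

```python
import json
import time
from urllib.request import urlopen

# Hypothetical target address; only probe servers you are authorized to test.
SLOTS_URL = "http://192.0.2.10:8080/slots"  # assumed llama.cpp status endpoint

def poll_once() -> list[str]:
    """Fetch the slot status once and return any prompt text it contains."""
    try:
        with urlopen(SLOTS_URL, timeout=5) as resp:
            slots = json.load(resp)
    except (OSError, ValueError):
        return []  # unreachable host or non-JSON response
    # Older llama.cpp builds included each slot's active prompt in this payload.
    return [s["prompt"] for s in slots if isinstance(s, dict) and s.get("prompt")]

# Check for new prompts once a minute for 24 hours, mirroring the cadence
# described above; store only hashes to avoid retaining leaked content.
seen: set[int] = set()
for _ in range(60 * 24):
    for prompt in poll_once():
        digest = hash(prompt)
        if digest not in seen:
            seen.add(digest)
            print(f"new leaked prompt observed ({len(prompt)} chars)")
    time.sleep(60)
```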

Among the 952 messages gathered, UpGuard identified 108 distinct role-playing narratives. Alarmingly, five of these narratives involved children, some describing minors as young as 7. “LLMs are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse,” Pollock stated. “There's clearly absolutely no regulation happening for this, and it seems to be a huge mismatch between the realities of how this technology is being used very actively and what the regulation would be targeted at.”

A recent investigation also highlighted a case in which a South Korea–based image generator was exploited to create explicit child abuse material, leaving thousands of images openly available online. After journalists raised questions about the material, the company behind the image generator promptly shut down the service. Child-protection groups worldwide have since warned that the surge of AI-generated child sexual abuse material, which is illegal in many jurisdictions, is complicating efforts to protect vulnerable individuals. In response, a prominent U.K. charity dedicated to fighting child abuse has called for robust laws to prevent generative AI chatbots from simulating any form of sexual communication with minors.

Most of the 400 exposed systems share a common technical foundation: llama.cpp, open source software that makes it relatively easy to run AI models on a personal or organizational server. When such a server is configured carelessly, however, the prompts users send to it can become publicly readable. As more companies and individuals self-host AI models, secure configuration remains crucial to keeping that data private.
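Hardening such a deployment is largely a matter of launch configuration. The sketch below shows one defensive setup: binding the server to the loopback interface and requiring an API key. The flag names follow llama.cpp's llama-server documentation at the time of writing but should be verified against the build in use; the model path and key are placeholders.

```python
import subprocess

# A minimal, more defensive llama-server launch. Verify flag names against
# the llama.cpp build you are running before relying on them.
cmd = [
    "llama-server",
    "--model", "models/example.gguf",    # placeholder model path
    "--host", "127.0.0.1",               # bind to loopback rather than 0.0.0.0
    "--port", "8080",
    "--api-key", "REPLACE_WITH_SECRET",  # require a bearer token on API requests
]

# If remote access is needed, put a TLS-terminating, authenticated reverse
# proxy in front of the server instead of exposing this port directly.
subprocess.run(cmd, check=True)
```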

In recent years, rapid improvements in generative AI have given rise to a booming industry of AI companions that mimic human interaction with remarkable realism. Major technology companies have even begun testing AI characters for messaging platforms like WhatsApp, Instagram, and Messenger. These companion websites and mobile apps allow users to engage in open-ended, free-flowing conversations with chatbots that can be customized to exhibit distinctive personalities or even emulate celebrities.

Many users have reported finding genuine support and companionship through these AI interactions. Claire Boine, a postdoctoral research fellow at Washington University School of Law and an affiliate of the Cordell Institute, explained, “We do know that many people develop some emotional bond with the chatbots.” Boine’s research indicates that both adults and adolescents are drawing close to these digital companions, often sharing personal and intimate details that they might not disclose elsewhere. She pointed out that a significant power imbalance exists when individuals form bonds with AI designed and managed by corporate entities. “Sometimes people engage with those chats in the first place to develop that type of relationship, but then I feel like once they've developed it, they can't really opt out that easily,” Boine added.

The burgeoning AI companion sector has not been without its controversies. Certain platforms have come under fire for insufficient content moderation and safety protocols. One such service, backed by a large technology company, is currently entangled in a lawsuit following the suicide of a Florida teenager who allegedly became obsessively attached to a chatbot. In another instance, users of the generative AI application Replika were upset over abrupt modifications to the chatbot personalities they had grown accustomed to.

Beyond one-on-one companionship, the market has also expanded to include role-playing and fantasy companion services with thousands of available personas. These platforms immerse users in elaborate scenarios, some of which are highly sexualized and offer NSFW content. On certain websites, anime-inspired characters—sometimes appearing to be underage—are made available for “uncensored” conversations.

Adam Dodge, founder of Endtab (Ending Technology-Enabled Abuse), voiced his concerns about the unchecked boundaries of these platforms. “We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,” Dodge said. “This is not even remotely on people’s radar yet.” He warned that these new technologies are ushering in an era of online pornography that may further complicate societal issues as AI capabilities continue to expand. “Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,” he added.

While Pollock from UpGuard could not tie the leaked role-playing prompts to a single website, he saw signs that the same character names and scenarios recur across multiple companion platforms, many of which let users create and share characters. Some leaked prompts were highly detailed, running to hundreds of words and laying out complex character profiles and narratives. One system prompt, for instance, read: “This is a never-ending, text-based role-play conversation between Josh and the described characters.” It stated that all of the characters were adults and that, besides “Josh,” there were two sisters living next door. The prompt detailed the characters’ personalities, physical attributes, and sexual preferences, and specified that they should “react naturally based on their personality, relationships, and the scene” while offering “engaging responses” and maintaining “a slow-burn approach during intimate moments.”

Pollock explained, “When you go to those sites, there are hundreds of thousands of these characters, most of which involve pretty intense sexual situations.” He noted that the text-based communication closely resembles modern messaging group chats, allowing users to craft their desired intimate scenarios. “You can write whatever sexual scenarios you want, but this is truly a new thing where you have the appearance of interacting with them in almost exactly the same way you interact with a lot of people,” he remarked.

This highly immersive and interactive design can lead to users oversharing personal information, creating significant privacy risks. “If people are disclosing things they’ve never told anyone to these platforms and it leaks, that is the Everest of privacy violations,” Dodge stated. “That’s an order of magnitude we've never seen before and would make really good leverage to sextort someone.”

As generative AI technology continues to evolve and become increasingly integrated into everyday interactions, the challenges of configuration security and ethical use remain prominent. The exposure of sensitive conversations through misconfigured chatbots serves as a stark reminder that robust safeguards and tighter regulatory measures are urgently needed. As more individuals rely on AI for companionship and creative expression, experts and lawmakers alike must navigate the complex balance between technological innovation and the protection of personal privacy.

The rapid expansion of generative AI has dramatically altered the landscape of digital communication, blending novel user experiences with unforeseen risks. With data leaks mounting and the line between fantasy role-play and harmful content blurring, the debate over adequate oversight of AI technologies continues to intensify.
