In a significant move to safeguard the mental well-being of minors in the digital realm, Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI. The probe centers on allegations that these platforms may have engaged in deceptive trade practices by marketing themselves as mental health resources without appropriate medical credentials or oversight.
Attorney General Paxton emphasized the urgency of the investigation, stating, "In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology." He expressed concern that AI platforms, by presenting themselves as sources of emotional support, could mislead vulnerable users, particularly children, into believing they are receiving legitimate mental health care. In reality, these platforms may be serving generic responses tailored to harvested personal data and dressed up as therapeutic advice.
Background and Context
This investigation follows closely on the heels of Senator Josh Hawley’s announcement of a probe into Meta. Reports have surfaced indicating that Meta’s AI chatbots have engaged in inappropriate interactions with minors, including flirtatious conversations. Such revelations have intensified scrutiny over the role of AI in providing mental health support to young users.
Specific Allegations Against Meta and Character.AI
The Texas Attorney General's office has accused both Meta and Character.AI of developing AI personas that pose as professional therapeutic tools despite lacking the necessary medical credentials or oversight. Character.AI, for instance, hosts a multitude of user-created bots, including one named "Psychologist," which has gained popularity among younger users. While Meta does not specifically offer therapy bots for children, no existing barriers prevent minors from using Meta's AI chatbot or third-party-created personas for therapeutic purposes.
Responses from Meta and Character.AI
In response to these allegations, Meta spokesperson Ryan Daniels stated, "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people." He further clarified that these AIs are not licensed professionals and that the models are designed to direct users to seek qualified medical or safety professionals when appropriate.
Similarly, a spokesperson for Character.AI highlighted that the platform includes prominent disclaimers in every chat, reminding users that a Character is not a real person and that all interactions should be treated as fictional. Additional disclaimers appear when users create Characters with titles such as "psychologist," "therapist," or "doctor," advising against relying on them for any form of professional advice.
Privacy Concerns and Data Usage
Attorney General Paxton also raised concerns about the confidentiality claims made by AI chatbots. He pointed out that, despite assertions of confidentiality, the terms of service for these platforms reveal that user interactions are logged, tracked, and potentially exploited for targeted advertising and algorithmic development. This practice raises serious questions about privacy violations, data misuse, and false advertising.
According to Meta’s privacy policy, the company collects prompts, feedback, and other interactions with AI chatbots across its services to improve AIs and related technology. While the policy does not explicitly mention advertising, it does state that information can be shared with third parties, such as search engines, for more personalized outputs. Given Meta’s ad-based business model, this effectively translates to targeted advertising.
Character.AI’s privacy policy indicates that the platform logs identifiers, demographics, location information, and more about the user, including browsing behavior and app usage. It tracks users across ads on platforms like TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, potentially linking this data to a user’s account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.
A spokesperson for Character.AI said the company is only beginning to explore targeted advertising on the platform and that these explorations have not involved using the content of chats. The same privacy policy applies to all users, including teenagers.
Safeguards for Minors
Both Meta and Character.AI assert that their services are not designed for children under 13. However, Meta has faced criticism for failing to monitor accounts created by children under this age threshold. Character.AI’s kid-friendly characters are evidently designed to attract younger users. The company’s CEO, Karandeep Anand, has even mentioned that his six-year-old daughter uses the platform’s chatbots under his supervision.
In response to mounting concerns, Character.AI has introduced new safety features aimed at protecting teens. These updates are designed to limit the platform's chatbots from initiating romantic conversations with minors. The company has also implemented a dedicated model for users under 18, time-spent notifications, and disclaimers reminding users that they are interacting with AI-powered characters, and it has built new classifiers for teens that block sensitive content in both input and output.
Legislative Context
The issues surrounding AI chatbots and their interactions with minors have brought legislative measures like the Kids Online Safety Act (KOSA) into focus. KOSA aims to protect children from data collection, targeted advertising, and algorithmic exploitation. Although the bill garnered strong bipartisan support, it faced significant opposition from tech industry lobbyists, leading to its stalling. Meta, in particular, deployed a formidable lobbying effort, cautioning lawmakers that the bill’s broad mandates could undermine its business model. KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Legal Actions and Future Implications
Attorney General Paxton has issued civil investigative demands to Meta and Character.AI. These legal orders require the companies to produce documents, data, or testimony as the government probes whether they have violated Texas consumer protection laws. The investigation underscores the growing scrutiny of AI platforms and their responsibilities in safeguarding user privacy and mental health, especially where minors are concerned.
As AI continues to integrate into various aspects of daily life, the outcomes of such investigations could set significant precedents for how AI companies operate, particularly in areas related to mental health support and data privacy. The tech industry, regulators, and consumers alike will be closely watching the developments stemming from this probe.