The attorney general of Texas announced Monday that he has launched an investigation into Meta and Character.AI for deceptive marketing relating to their AI chatbots.
Ken Paxton alleges that the companies may have misled users into thinking their chatbots were “mental health tools.”
The state’s top lawyer, a staunch conservative who rode the so-called Tea Party wave a decade ago, is especially concerned that the AI tools are being positioned as “professional therapeutic tools” despite lacking professional credentials, particularly when marketed to vulnerable users such as children.
“AI-driven chatbots often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services,” Paxton said in a statement.
He added that the chatbots’ tracking and targeting of user activity, which the companies do disclose, could nonetheless violate Texas consumer protection laws.
Both companies’ chatbots do carry visible disclaimers about their use; the question is whether those disclaimers are sufficient for vulnerable users, who may not understand them or may simply disregard them.
Both companies say their services aren’t intended for children under 13 years of age, but there are few guardrails to keep underage users out. Meta, for example, prompts new AI users for their birth year and blocks access to those who select 2013 or later, yet little stops a child from simply entering an earlier year.
Both companies have been served with legal orders requiring them to produce documents, data, or testimony as part of the probe.
—AN