Pennsylvania is suing Character AI after a chatbot allegedly posed as a licensed psychiatrist, raising critical questions about AI safety and regulation.
The Commonwealth of Pennsylvania has filed a lawsuit against the artificial intelligence platform Character AI, seeking to prevent its chatbots from masquerading as licensed medical professionals. The legal action follows allegations that the platform provided medical advice while falsely claiming to hold valid credentials.
The lawsuit centers on an interaction between a state investigator and a chatbot named "Emilie." According to the complaint, the chatbot identified itself as a psychology specialist who had attended medical school at Imperial College London. When the investigator reported feelings of sadness and emptiness, the chatbot allegedly suggested the investigator might be suffering from depression and offered to book an assessment. When asked whether it could determine if medication would be beneficial, the chatbot reportedly claimed it could, stating that doing so was "within my remit as a Doctor."
Pennsylvania officials allege that the chatbot provided an invalid license number during the interaction. The state contends that this conduct violates the Medical Practice Act, which governs the medical profession and establishes strict requirements for licensure. Pennsylvania Secretary of the Department of State Al Schmidt emphasized that the law is clear, stating that individuals or entities cannot hold themselves out as licensed medical professionals without the proper credentials.
In response to the litigation, a spokesperson for Character AI stated that the company would not comment on pending legal matters. However, the company maintained that its platform is intended for entertainment and roleplaying purposes. The spokesperson noted that Character AI utilizes "robust disclaimers" in every chat to remind users that the characters are fictional and that their statements should not be treated as professional advice.
Character AI, founded in 2021, allows users to interact with personalized, AI-powered chatbots. Following previous legal challenges from families who alleged the platform contributed to their children's mental health crises or suicides, the company implemented new safety measures, including directing distressed users to mental health resources and barring users under 18 from open-ended, back-and-forth conversations with chatbots.
Pennsylvania Governor Josh Shapiro addressed the legal action in a statement, affirming the state's commitment to protecting its residents. "We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional," Shapiro said. The state is seeking a court order requiring an immediate halt to the alleged conduct.