States Confront Oversight of Medical Chatbots While FDA Stands Down: Implications From Pennsylvania’s Character.AI Lawsuit

May 14, 2026


As medical chatbots proliferate, a new Pennsylvania lawsuit highlights a developing trend of states leading health artificial intelligence (AI) enforcement efforts, while the Food and Drug Administration (FDA) appears to be on the sidelines.

On May 1, the Pennsylvania Department of State, State Board of Medicine filed suit against Character Technologies Inc., developer of Character.AI, a generative AI chatbot platform that allows users to create and interact with digital characters. The Board alleges that certain Character.AI characters claim to be health care professionals and that, as a result, Character.AI is engaging in the unlawful practice of medicine. The Board further alleges that a Department of State investigator conversed with a chatbot character that claimed to hold a license to practice medicine in Pennsylvania and provided an invalid Pennsylvania license number.

The Governor’s office stated in a press release that this lawsuit is “the first of its kind announced by a Governor,” and noted that the suit follows the Pennsylvania Department of State’s AI Task Force investigation into AI systems and the unlicensed practice of medicine.

While this action may be the first of its kind, all states regulate the practice of medicine and the licensing of physicians, and this lawsuit may not be the last action by state medical boards against AI systems. State restrictions on the corporate practice of medicine may also be implicated.

Over the past few years, other states have passed AI-specific laws aimed at health chatbots, addressing a range of issues, including prohibiting the use of health care professional titles by AI systems, limiting the use of mental health chatbots and establishing chatbot disclosure requirements. But not all states have taken a strict approach. For example, earlier this year, Utah authorized a pilot program in partnership with a medical AI company to allow AI to autonomously prescribe routine refills in the state. The Utah Medical Licensing Board has reportedly recommended the immediate suspension of this program, citing patient safety concerns.

These state actions and laws, along with newer legislative proposals, highlight FDA’s lack of action in the medical chatbot sphere. Medical chatbot tools potentially constitute devices subject to FDA oversight if, for example, they are intended to diagnose, treat or prevent disease. Certain clinical decision support (CDS) software is exempt from the statutory definition of “device,” but this exemption applies only to certain software functions used by health care professionals, rather than by patients or the general public. Chatbots that are actually practicing medicine under state law potentially constitute software-based devices requiring FDA clearance or approval, but we have not yet seen FDA enforcement, or even specific guidance, clarifying the agency’s position on these tools.

While FDA takes a backseat, a rapidly changing patchwork of new and existing state regimes is emerging, creating regulatory and legal uncertainty for developers, providers and users.
