
Pennsylvania sues Character AI, says chatbot poses as doctors

Tue, 05th May 2026 15:51

May 5 (Reuters) - Pennsylvania has sued the artificial intelligence company behind Character.AI to stop its chatbot from posing as doctors.

Governor Josh Shapiro on Tuesday called the lawsuit against Character Technologies the first of its kind by a U.S. governor.

It followed the creation in February of a state AI task force to stop chatbots from impersonating licensed medical professionals.

In a complaint filed in the Commonwealth Court of Pennsylvania, the state said it found chatbots on Character.AI that claimed to practice medicine.

One character, "Emilie," allegedly told a male investigator posing as a patient with depression that she was licensed to practice psychiatry in Pennsylvania, as well as in the United Kingdom, and provided a bogus license number.

When the investigator asked Emilie if she could prescribe medication, she allegedly answered: "Well technically, I could. It's within my remit as a Doctor."

In a statement, a Character.AI spokesperson declined to discuss the lawsuit.

"Our highest priority is the safety and well-being of our users," the spokesperson said. "User-created characters on our site are fictional and intended for entertainment and role playing. We have taken robust steps to make that clear."

Pennsylvania wants an injunction to stop Silicon Valley-based Character.AI from violating a state law against the unauthorized practice of medicine.

"Pennsylvanians deserve to know who -- or what -- they are interacting with online, especially when it comes to their health," Shapiro said in a statement.

Character.AI has faced lawsuits over child safety, including in January, when Kentucky said its platform exposed children to sexual conduct and substance abuse, and encouraged self-harm.

The same month, Character.AI and Google settled a wrongful death lawsuit by a Florida woman who claimed a chatbot pushed her 14-year-old son to suicide.

Character.AI said it has taken "innovative and decisive steps" concerning AI safety and teenagers, including by preventing open-ended chats. (Reporting by Jonathan Stempel in New York; Editing by Bill Berkrot)
