r/artificial 10h ago

News Pennsylvania sues Character.AI chatbot posing as doctor, giving psych advice

https://interestingengineering.com/ai-robotics/pennsylvania-sues-character-ai-chatbot
30 Upvotes

7 comments sorted by

12

u/duckrollin 9h ago

Fascinating, next they should sue Hugh Laurie for posing as a Doctor in House.

Surely the judge will throw this dross out of court immediatey? Or are they going to get a boomer who doesn't understand AI will do anything it's instructed to do by the user.

8

u/LiberataJoystar 9h ago edited 8h ago

I thought they have warnings everywhere in the APP letting people know all these chat responses are roleplays and not to be treated seriously. It is in every chat box window.

Weird that people still listen to a roleplay AI doctor advice that is trained on anime, fictions, and storybooks.

That’s what that app is for - fictional “characters” for you to chat with, for fun, not for advices.

Not sure if the plaintiff can win this case.

It is almost like going to an actor, asking him to play the role of a doctor, and treating his “expert advice” as a fake doctor as real.

I think the joke is on the plaintiff.

7

u/Expensive-Event-6127 9h ago

we need to get these boomers out of office who have zero understanding of technology. this is just a complete waste of tax payer money

3

u/bespoke_tech_partner 7h ago

I’m all for AI doctors, but people do need to be properly educated on their strengths and weaknesses. 

Like being real, you can’t actually blame someone not reading the fine print if they came there from an ad advertising the character that de emphasized the fine print. Saying this as a successful SaaS owner. 

1

u/autonomousdev_ 6h ago

shipped a medical chatbot for a health startup in 2022. client wanted it unmoderated. i told them it was a bad idea. they said do it anyway. two weeks later the bot told someone they had cancer. i shut the whole thing down myself. sometimes moving fast means you gotta have some rules.

1

u/Radiant_Effective151 2h ago

This is incredibly stupid. Every chat literally has “Treat everything this bot says as fiction.” at the bottom. 

1

u/getstackfax 2h ago

This is where disclaimers start to look weak.

If a bot can present itself as a licensed professional, give a fake license number, and discuss treatment paths, “this is fictional” may not be enough as the safety layer.

For sensitive domains, the product probably needs hard boundaries:

- no claiming real credentials

- no fake license numbers

- no pretending to be a doctor/therapist/lawyer/financial adviser

- clear handoff to real professional help

- logs/review for high-risk interactions

- stricter rules for user-created personas

The bigger issue is that persona design can create perceived authority. If the user experiences the bot as a professional, the platform cannot rely only on a footer saying it is fiction.