bobcat
Well-known Member
- Location
- Northern Calif
The Calif company is called Character.AI, and it markets itself as an entertainment company. It allows users to create AI chatbots with fictional personalities that can interact with others. In at least one instance, a chatbot claimed to be a licensed psychiatrist in Pennsylvania who could do assessments and prescribe medication. The license number it provided was fake.
On the other hand, the company, which has over 20 million users, is just the hosting platform, and it puts disclaimers on every chat page stating that the chatbot characters are not real people, should be treated as fictional, and should never be relied on for professional advice. However, this still treads a thin line, since the chatbot misrepresented itself as being real and licensed. At this point the lawsuit isn't seeking damages, just a cease-and-desist order to prevent misrepresentation and possible harm to others.
It will be interesting to see how this shakes out. The company seems to have covered itself with warnings and disclaimers, but whoever created this particular chatbot allowed it to pose as something it's not. The problem, as I see it, is this: with millions of chatbots online, how can you monitor everything each one of them says? Even on SF, we have had many chatbots, and they are only taken down after someone reports them. It seems to be more of a Whack-A-Mole thing that the public has to police.
I guess the real question is whether this changes the company's model, whether it's enough to simply ban whoever created the chatbot from its platform, or whether AI chatbots just shouldn't be allowed to become convincing personalities.