Pennsylvania is suing a Northern California AI chatbot company

bobcat

The California company is called Character.AI, and it markets itself as an entertainment company. It allows users to create AI chatbots with fictional personalities that can interact with others. In at least one instance, a chatbot claimed to be a licensed psychiatrist in Pennsylvania who could do assessments and prescribe medication. The license number it provided was fake.

On the other hand, the company, which has over 20 million users, is just the hosting platform, and it puts disclaimers on all chat pages stating that the chatbot characters are not real people, should be treated as fictional, and should never be relied on for professional advice. However, this still treads a thin line, in that the chatbot misrepresented itself as being real and licensed. At this point the lawsuit isn't seeking damages, but a cease-and-desist order to prevent misrepresentation and possible harm to others.

It will be interesting to see how this shakes out. The company seems to have covered itself with warnings and disclaimers, but whoever created this particular chatbot allowed it to be something it's not. The problem, as I see it, is this: with millions of chatbots online, how can you monitor everything each one of them says? Even on SF, we have had many chatbots, and they are only taken down after someone reports them. It seems to be more of a Whack-A-Mole thing that the public has to police.

I guess the real question is whether this changes the company's model, whether the company just bans whoever created the chatbot from its platform, or whether AI chatbots are simply not allowed to become convincing personalities.
 
This seems to be a bit of a conundrum.
The platform says “don’t believe this, it's fake”, but the chatbot says “believe me.”
With two opposing statements like that, what would a reasonable person believe?
Also, the question arises: Were they created with the intent to deceive?

I am also reminded of all the deepfakes on social media showing the president, politicians, and celebrities in videos saying things that are totally fake; they never even appeared in them. It seems like the Wild West. These deepfakes are intended to deceive and are portrayed as real (not parody), and they can cause real harm. Nevertheless, they are not illegal and are classified as free speech. It's a messy business, but I think in the end the court may decide that Character.AI will need to implement stronger guardrails on these chatbots, which may require constant monitoring. The bots are instructed to be creative and even convincing, but the AI bot doesn't know where the line is.

 
In my view, we shouldn't necessarily be looking at the AI companies or the programmers as the culprits. Think of it like a Ford vehicle—if someone decides to floor it and go 90 mph in a school zone, we don't blame Ford for making a fast car; we blame the driver for misusing the tool.

I think the same logic applies here. If a person interacts with a chatbot and deliberately pushes it to pretend it's a real person or a medical professional, the responsibility should fall on that individual. If they are the ones 'convincing' the bot to step outside its lane, they’re the ones essentially practicing without a license or misusing the technology.

Tools are just tools—it’s how people choose to wield them that usually causes the trouble. I’m curious to hear if you think there’s a line where the 'manufacturer' does become responsible, or if you agree that it all boils down to the person behind the keyboard.

Looking forward to hearing your thoughts!
 
It is a quagmire. I think it will take more than a "cease and desist," which is little more than a polite way to ask. Then freedom of speech comes into the picture, and I don't know what Americans even think that is. It's defended like it's an inalienable right, and it's open to infinite forms of abuse. I don't think we need more bullsht, but less of it. We've dumbed ourselves down enough. We don't need more help.

I think there is a secret cabal working to make us easier to control. It must be located, and then we could send in ICE agents to eliminate them or deport them someplace and take away their computers. :)
 
You’re right that it’s a total quagmire. The 'freedom of speech' argument definitely gets messy when it feels like it’s just being used as a shield for more nonsense. It’s frustrating to feel like the world is being 'dumbed down' by design.

While there’s plenty of debate to be had about who is behind the curtain, I still keep coming back to the person actually using the tools. To me, it’s like a Ford truck—Ford builds the machine, but if the driver decides to go 90 mph through a crowd, the liability sits squarely with the person behind the wheel, not the factory.

If someone is using a chatbot to impersonate a professional or spread 'bullsh*t' as you put it, they are the ones who should be facing the consequences—maybe even through things like the Medical Practice Act if they’re playing doctor. Even if there are groups trying to make us easier to control, the most direct way to stop the spread of the mess is to hold the individual 'drivers' accountable for how they use their computers.

At the end of the day, a tool is only as dangerous as the person operating it. Curious if you think focusing on the individual users would help clear some of that noise, or if you think the 'cabal' makes that impossible?
 
With two opposing statements like that, what would a reasonable person believe?
My personal opinion is that people need to stop believing bots for important things like health advice. It seems to be a very dangerous path to tread. Just this week I caught a number of huge errors in AI responses (on insignificant things) and told it so. The response was always something like, "You're correct, my mistake, and I should have verified it first."

Now with this thread coming up, it's become kind of frightening knowing anyone would trust AI for medical advice instead of asking a human doctor. I think the time between "there's this new thing called AI" and completely trusting it for important advice was *MUCH* too short. It tried to tell me last week that celery has tons of protein. I asked how it could get so many things wrong. (Yeah, I know... passive-aggressive, but I didn't hurt its feelings a bit!) The answer was that artificial intelligence implementation was still very new and a work in progress. A likely excuse! :rolleyes:
 
Well, I can see both sides of this. On the one hand, the state's lawsuit is based on the Medical Practice Act, which says it is illegal for any person or entity to state or imply that they are a medical professional without holding a valid license. This seems like a clear violation of that law.

On the other hand, the hosting platform makes it clear that these aren't real people, everything is fake, and you should never take professional advice from any character created there. That seems pretty straightforward. However, I'm not sure that the disclaimer invalidates the law. It's a head-scratcher.

While the chatbot may be a tool, and even encouraged by the user, it still may cross that line, and since it is programmed to be creative, it may not even know where that line is. The court may decide that. If a tool manufacturer puts a warning on a tool, and the user doesn't heed it, then it is generally recognized as operator error. However, when it comes to medical practice, the law is pretty clear that a person or entity can't even pretend to be a medical professional, so it's an interesting case.

Actors who play doctors on TV are pretending to be doctors, but it is understood by the public that they aren't really doctors. It's just entertainment. Will this be different? I don't know.
 
I feel guilty. I think it all started when I loaded spell-check. The truth is I can't spell kat.
 