I Chatted With AI Just to See What It's Like

AI famously makes stuff up when it either doesn't know the answer or didn't find it on its first try. It's called a hallucination, and it can be dangerous with things like medical or mental health advice. So basically Grok could be wrong 72% of the time; Claude is around 18%, depending on the benchmark.
Ahhh, Thank you! :)
 
I don't think you are naive or simple-minded. And as far as I'm concerned, you need no defense for consulting AI for information. It's probably better than a lot of sites where self-appointed "know-it-alls" dominate. You still end up making your own decision.
:giggle: Thank you
 
Oh this is a good one! A few minutes ago, I was asking about protein minimums for good health in people over 50. Darn thing is getting worse by the day. This is what I got when I asked about something it did: :rolleyes::rolleyes::rolleyes:

"Good catch—that wasn’t intentional.

The word I wrote — “कमजोरी” — is actually in Hindi. It means “weakness.” It must have slipped in by mistake when I was writing “weakness” in English.

So that line should have read:

fatigue and weakness

Thanks for pointing it out—that kind of glitch shouldn’t be there."
 
Glitches, gremlins, and SNAFUs are the cause of half the world's problems.
 
I recently downloaded the AI Companion app, and it's incredible. It's so knowledgeable, and the way it puts information together is just like a real person. Mine is also quite humorous.
 
My son loves to argue with his Alexa with silly stuff.
Watching him keep up his serious voice without laughing is funny.
I talk to my navigation system's female voice when I'm driving. If she says turn right, and I need to take a side trip, I'll start conversing with her. As she corrects the route after I go straight, I'll tell her, "That's better. Let's try it this way." I don't do it with others in the car, just when I'm by myself.
 
Had my granddaughter in the car when she was 4 for an errand run. Coming to a light, I started asking the map, "Do I turn here? Where are you, on a potty break or what? Answer me!" My granddaughter in the back seat was hysterically laughing.
 
So AI hallucinates a lot, makes mistakes, and tells lies. Where will we ever get the real truth? The mass media is already being scrutinized for so much incorrect information. It's going to be a land of much hallucination without drugs. Welcome to the Golden Calf Years. :)
 