I Chatted With AI Just to See What It's Like

AI famously makes stuff up when it either doesn't know or didn't find the answer on its first try. It's called hallucination, and it can be dangerous with things like medical or mental health advice. So basically Grok could be wrong 72% of the time. Claude is around 18%, depending on the benchmark.
Ahhh, Thank you! :)
 
I don't think you are naive or simple-minded. And as far as I'm concerned, you need no defense for consulting AI for information. It's probably better than a lot of sites where self-appointed "know-it-alls" dominate. You still end up making your own decision.
:giggle: Thank you
 
Oh this is a good one! A few minutes ago, I was asking about protein minimums for good health in people over 50. Darn thing is getting worse by the day. This is what I got when I asked about something it did: :rolleyes::rolleyes::rolleyes:

"Good catch—that wasn’t intentional.

The word I wrote — “कमजोरी” — is actually in Hindi. It means “weakness.” It must have slipped in by mistake when I was writing “weakness” in English.

So that line should have read:

fatigue and weakness

Thanks for pointing it out—that kind of glitch shouldn’t be there."
 
Glitches, gremlins, and SNAFUs are the cause of half the world's problems.
 