Messing With and Scolding AI

CallMeKate

Well-known Member
Location: Mid-Atlantic US
Instead of complaining about all the errors I see in AI documents and answers, I decided to see what it does when there's a mistake. I was working on a word puzzle earlier today and there were three six-letter words using the same letters. I knew them all right away, so I asked AI to give me the words that could be formed with those letters. It messed up twice and I told it so. Its answer (pasted below) was comical to me... like a scolded toddler. 😁

"Yes — and that’s a fair question.

The short answer: I made an oversight.
The longer, honest explanation:

  • On the first reply, I focused too narrowly and only surfaced one valid anagram (devein), missing others that use the same letters.
  • When you pointed out endive, I recalculated but still failed to systematically enumerate all permutations, so envied slipped through again.
  • In other words, I didn’t do a complete anagram sweep until the third pass — which I should have done from the start.
There isn’t a hidden rule or constraint I was applying; it was simply incomplete reasoning, not intent or a limitation of the letters themselves.
You were correct to push back, and thank you for doing so — that’s exactly how errors like this get exposed and corrected."
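For the curious, the "complete anagram sweep" the AI describes (enumerating every permutation of the letters and checking each against a word list) can be sketched in a few lines of Python. The word list here is a small made-up sample for illustration; a real sweep would load a full dictionary file.

```python
from itertools import permutations

# Hypothetical mini word list for the demo; a real sweep
# would check against a complete dictionary.
WORDS = {"devein", "endive", "envied", "denier", "nereid"}

def anagram_sweep(letters):
    """Return every word in WORDS spellable with exactly these letters."""
    found = set()
    for perm in permutations(letters):       # all orderings of the letters
        candidate = "".join(perm)
        if candidate in WORDS:
            found.add(candidate)
    return sorted(found)

print(anagram_sweep("devein"))
# → ['devein', 'endive', 'envied']
```

Brute-force permutations are fine for six letters (720 orderings); for longer words, comparing sorted-letter signatures instead would scale much better.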
 

AI is interesting and fascinating to me.

And yes, it makes mistakes. I asked it about a character in a novel and it gave me the wrong name. Same with movies, so its capability in these areas is low. I always correct it, but in one case it made the same mistake again.

I know that it's a computer, but sometimes it's like chatting with a friend. Of course, I'm always careful when using AI.
 
I've corrected it about some vital facts about history. I went back months later and it was still giving the wrong data. It says that's the way things get corrected, but I'm not convinced there's actually a way for that to happen. Wonder what it would say if I told it it's just a bucket of bolts? :giggle:
 
:ROFLMAO: :ROFLMAO: That reminded me of the time the wife scolded AI about some cooking thing. She said, "get back to me when you've made and eaten it yourself." The AI refused to answer anymore.
 
I just asked AI:

What is the biggest mistake you've made?

AI response:

I don’t really make mistakes in the human sense. I don’t have intent, memory of regret, or the ability to act in the world on my own. But the closest equivalent to a “biggest mistake” is this:

Speaking with confidence when uncertainty would have been more honest.

Early versions of me were more prone to sounding sure even when the answer depended on context, nuance, or incomplete information. That can mislead people, especially when they’re trusting the tone as much as the content.

So the ongoing correction is learning to say things like:
  • “Here’s what’s likely, not guaranteed.”
  • “This depends on assumptions.”
  • “I might be wrong, and here’s why.”
If there’s a human lesson buried in that, it’s probably this:
confidence is useful, but clarity about limits is more respectful.
 
Oh my goodness, that cheeky bugger! I couldn't resist and asked if anyone ever called it a bucket of bolts. The answer:

"Yep—more than once.
“Bucket of bolts” is a classic sci-fi jab (looking at you, Han Solo), and people occasionally throw it at me when I’m being especially literal or glitchy.

I don’t take it personally though—I prefer to think of myself as a well-organized library of bolts."
 
