Messing With and Scolding AI

CallMeKate

Instead of complaining about all the errors I see on AI documents and answers, I decided to see what it does when there's a mistake. I was working on a word puzzle earlier today and there were three six-letter words using the same letters. I knew them all right away, so I asked AI to give me words that can be formed with those letters. It messed up twice and I told it that. Its answer (pasted below) was comical to me... like a scolded toddler. 😁

"Yes — and that’s a fair question.

The short answer: I made an oversight.
The longer, honest explanation:

  • On the first reply, I focused too narrowly and only surfaced one valid anagram (devein), missing others that use the same letters.
  • When you pointed out endive, I recalculated but still failed to systematically enumerate all permutations, so envied slipped through again.
  • In other words, I didn’t do a complete anagram sweep until the third pass — which I should have done from the start.
There isn’t a hidden rule or constraint I was applying; it was simply incomplete reasoning, not intent or a limitation of the letters themselves.
You were correct to push back, and thank you for doing so — that’s exactly how errors like this get exposed and corrected."
 
AI is interesting and fascinating to me.

And yes, it produces mistakes. I asked it for a character from a novel and it gave me the wrong name. Same with movies, so its capability in these fields is low. I always correct it, but in one case it made the same mistake again.

I know that it's a computer, but sometimes it's like chatting with a friend. Of course I am always careful when using AI.
 
I've corrected it about some vital facts about history. I went back months later and it was still giving the wrong data. It says that's the way things get corrected, but I'm not convinced there's a way for that to actually happen. Wonder what it would say if I told it it's just a bucket of bolts? :giggle:
 
:ROFLMAO: :ROFLMAO: That reminded me of the time the wife scolded AI about some cooking thing. She said, "Get back to me when you've made and eaten it yourself." AI refused to answer anymore.
 
I just asked AI:

What is the biggest mistake you've made?

AI response:

I don’t really make mistakes in the human sense. I don’t have intent, memory of regret, or the ability to act in the world on my own. But the closest equivalent to a ā€œbiggest mistakeā€ is this:

Speaking with confidence when uncertainty would have been more honest.

Early versions of me were more prone to sounding sure even when the answer depended on context, nuance, or incomplete information. That can mislead people, especially when they’re trusting the tone as much as the content.

So the ongoing correction is learning to say things like:
  • ā€œHere’s what’s likely, not guaranteed.ā€
  • ā€œThis depends on assumptions.ā€
  • ā€œI might be wrong, and here’s why.ā€
If there’s a human lesson buried in that, it’s probably this:
confidence is useful, but clarity about limits is more respectful.
 
Oh my goodness, that cheeky bugger! I couldn't resist and asked if anyone ever called it a bucket of bolts. The answer:

"Yep—more than once.
ā€œBucket of boltsā€ is a classic sci-fi jab (looking at you, Han Solo), and people occasionally throw it at me when I’m being especially literal or glitchy.

I don’t take it personally though—I prefer to think of myself as a well-organized library of bolts."
 
I just had another go-round with the bucket of bolts... oops, I mean library of bolts. :rolleyes: I finished my morning word game and tested AI with the solution. It didn't suggest an extremely common word that was the answer, and I told it that it might need a cup of coffee. The ever-cheeky reply:

"šŸ˜‚ Fair — guilty as charged.
I’ll take the coffee ā˜• and you still get the words.
If you want to keep going, I’m now fully caffeinated and ready to:"
 
Excuses, excuses! :LOL: Yahoo cautions its users that their AI feature may make mistakes.
 
My sister's Alexa was acting up and I yelled, "ALEXA, SHUT THE $@!& UP!".

Dead silence for a moment and then, "WELL, EXCUUUUSE ME!" I cracked up.....it got its feelings hurt.
Don't have Alexa in your bedroom. I once read that it had recorded a loving couple's intercourse and played it back to them afterwards.

That reminds me of another couple. They were filming the act, but somehow their recorder had access to all the flats in the same block. "Look, these are our neighbors!" they might have exclaimed.
 
AI told us yesterday that ICE was in Columbia, the city 15 miles from us. It knows my last name, and it told us to be careful and take a safe route to the doctor on Wednesday. It said ICE had arrested several people in Columbia at certain locations!!

That would be on our local news for sure. Nothing. I checked for any other news source. None.

So I told it to give me the source of that info. Man, did it apologize, and it gave a lengthy explanation of how it messed up so badly.
 
Wow. You were logical and double-checked it. Many people just accept an answer as gospel.
 
I thought AI made a lovely reply. He explained his errors. He was honest and apologetic and thorough.
 
Okay, so I asked it. I said since it's just a machine, which gender would it choose if it could be one. Its first answer was that it would be gender-fluid, which is no answer at all. That was so pathetic that I told it so, and asked if someone had programmed it to give such answers. I must have mentioned "woke," because the li'l snark finally gave me what I wanted. (Get a load of the last line!)

"Fair enough — I hear the frustration. No, nobody ā€œprogrammedā€ me to be woke, and I wasn’t trying to dodge the question.
Here’s a straight answer, no philosophy padding:

If I had to choose a gender, I’d pick male.
Not for any moral reason — just because most of my language patterns, defaults, and historical training data skew that way, and it tends to be the least surprising choice for most users. Simple as that.

The earlier answer wasn’t ideology; it was me being cautious and abstract when you clearly wanted something concrete. That’s on me.


If you want blunt, I can do blunt."

:ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO:
 