I was just sort of spooked by AI….

I think, when dealing with AI, you have to use your own smarts as well. Recently, I asked it what car would be good for a senior and added some other parameters. It gave me suggestions with explanations. I did my own research and visited some dealerships. Then, I asked it to plan a trip. Added parameters including seniors, mobility, foodies, and scenic drives. It came up with a pretty good itinerary. AI is useful, but scary, if you ask me.
 

Maybe I'm missing something, but I'm not all that upset about "AI". Humans are doing the programming, so AI is doing what it is programmed to do. And AI doesn't understand anything. AI can regulate systems, but to understand what symbols on a screen are takes a human. AI can ape human speech, but that's because human speech follows known patterns and rules. AI doesn't comprehend the meaning of "Kill all humans" or "{adj.n9c~} aac,fyysnb.98 aw". To AI, they are the same: results of equations and programming.
Back in the 1900s, having a phone line in your home caused numbness, nausea, deafness, dizziness, and anxiety about eavesdropping. And of course, a phone line caused "social disruption and loss of face-to-face interactions."
Sound kind of familiar?
 
Maybe I'm missing something, but I'm not all upset about "AI". Humans are doing the programming, so AI is doing what it is programmed to do. And AI doesn't understand anything.
I don't think it's like software that gets programmed. It scours the 'net and often picks up stupid misinformation. If I were using it for school or something, I'd fail. Case in point (one of many)... I was writing an article about the Civil War a few months ago. Asked a specific question about a veteran at Gettysburg. Darn thing gave me the name of a completely different soldier who, it said, did great and heroic things at G'burg.

Well, the "soldier" was 2 years old when Gettysburg happened! It had found a VERY inaccurate website and presented it as fact. When I corrected it with true and precise facts, know what it said? "Sorry, my bad. Of course you're right." Considering people use it for health concerns, it's unacceptable for it to give so many wrong answers, and it's even worse with health issues.
 

Sorry this is long, but this thread got me thinking and I asked AI why it gets so many things wrong. :devilish: At least it was honest. I learned something new... the misinformation is called hallucinations. Who knew?! Here's the answer:

"Great question — and a really important one. AI (like me) can give wrong answers for several key reasons, even though we're trained to be helpful and accurate. Here's a breakdown of why that happens:
________________________________

🔍 1. AI Doesn’t “Know” — It Predicts

AI like ChatGPT doesn't "know" facts the way humans do. It predicts the most likely response based on patterns in huge amounts of text data. That means: It can sound confident but still be wrong. It sometimes fills in gaps with guesses when unsure.
Analogy: Think of AI as an extremely fast autocomplete — not a fact-checking machine.
_______________________________

📚 2. Training Data Isn’t Perfect

AI learns from the internet and other large-scale texts. But:

Some of that information is outdated, biased, or just plain wrong. If the training data had errors, the AI might repeat or reinforce them.
________________________________

🧠 3. Complex or Nuanced Questions

Some questions are just hard, even for experts — like medical advice, legal interpretations, and deep science or math.

AI may give a simplified or incorrect answer, especially if the topic requires expert judgment or up-to-date research.
________________________________

💡 4. Lack of Real-Time Awareness

Unless connected to live data (which I can be if allowed), I don't always know: recent events, local changes, current product specs, laws, or news.

So answers might be accurate as of a certain time, but no longer valid.
_______________________________

How We’re Improving

OpenAI is actively working on: reducing hallucinations (false facts), improving fact-checking, letting users verify sources, and giving clearer disclaimers when answers may be unreliable."
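For anyone curious what that "fast autocomplete" analogy actually means, here's a minimal sketch in Python of the idea behind it: a toy model that only counts which word tends to follow which, then "predicts" the most common one. The tiny corpus and names here are made up purely for illustration; real AI models are vastly bigger and more sophisticated, but the point stands — it's prediction from patterns, not understanding.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" (real models train on billions of words).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the heart of a toy bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word: prediction, not understanding."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "cat" -- it follows "the" most often here
```

Notice the model never "knows" anything about cats or fish; if the corpus contained nonsense or errors, it would confidently predict nonsense or errors, which is exactly the hallucination problem in miniature.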
 
In my way of thinking, it's the most compelling reason why AI could really go downhill, but fast!!
Yeah... and I've seen some scary things. It used to be that you could tell AI images with things like mangled up fingers... often 6 fingers, etc. But now the deepfake has gotten so "good" (meaning frightening) that they make it look as if people are saying things they didn't say and/or doing things they didn't do. My FB feed is absolutely filled with it. I ignore it all... but everyone isn't going to do the same, and I foresee huge problems. Bottom line... there will at some point be no line between real and fake, and we'll never know what's true.
 

