I was just sort of spooked by AI….

I think, when dealing with AI, you have to use your own smarts as well. Recently, I asked it what car would be good for a senior and added some other parameters. It gave me suggestions with explanations. I did my own research and visited some dealerships. Then, I asked it to plan a trip. Added parameters including seniors, mobility, foodies, and scenic drives. It came up with a pretty good itinerary. AI is useful, but scary, if you ask me.
 

Maybe I'm missing something, but I'm not all that upset about "AI". Humans are doing the programming, so AI is doing what it is programmed to do. And AI doesn't understand anything. AI can regulate systems, but it takes a human to understand what the symbols on a screen are. AI can ape human speech, but that's because human speech follows known patterns and rules. AI doesn't comprehend the meaning of "Kill all humans" or "{adj.n9c~} aac,fyysnb.98 aw". To AI, they are the same thing: results of equations and programming.
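To make that concrete, here's a toy sketch of my own (real systems use learned subword tokenizers, not raw character codes, so treat this purely as an illustration): both the sentence and the gibberish arrive at the machine as nothing but lists of numbers.

```python
# Toy illustration only: turn text into the numbers a model actually
# operates on. Real models use learned subword tokenizers instead of
# raw character codes, but the point stands: it's all just numbers.
def to_numbers(text: str) -> list[int]:
    """Map each character to its Unicode code point."""
    return [ord(ch) for ch in text]

meaningful = "Kill all humans"
gibberish = "{adj.n9c~} aac,fyysnb.98 aw"

# Both strings come out as plain lists of integers; the arithmetic
# downstream has no notion that one of them "means" anything.
print(to_numbers(meaningful))
print(to_numbers(gibberish))
```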
Back in the 1900s, having a phone line in your home supposedly caused numbness, nausea, deafness, dizziness, and anxiety about eavesdropping. And of course, a phone line caused "social disruption and loss of face-to-face interactions."
Sound kind of familiar?
 
I don't think it's like software that gets programmed. It scours the 'net and often picks up stupid misinformation. If I were using it for school or something, I'd fail. Case in point (one of many)... I was writing an article about the Civil War a few months ago and asked a specific question about a veteran at Gettysburg. The darn thing gave me the name of a completely different soldier who, it said, did great and heroic things at G'burg.

Well, the "soldier" was 2 years old when Gettysburg happened! It had found a VERY inaccurate website and presented it as fact. When I corrected it with true and precise facts, know what it said? "Sorry, my bad. Of course you're right." It's unacceptable for it to give so many wrong answers on anything, but it's even worse considering people use it for health concerns.
 

Sorry this is long, but this thread got me thinking, and I asked AI why it gets so many things wrong. :devilish: At least it was honest. I learned something new... this kind of misinformation is called "hallucinations". Who knew?! Here's the answer:

"Great question — and a really important one. AI (like me) can give wrong answers for several key reasons, even though we're trained to be helpful and accurate. Here's a breakdown of why that happens:
________________________________

🔍 1. AI Doesn’t “Know” — It Predicts

AI like ChatGPT doesn't "know" facts the way humans do. It predicts the most likely response based on patterns in huge amounts of text data. That means:

- It can sound confident but still be wrong.
- It sometimes fills in gaps with guesses when unsure.

Analogy: Think of AI as an extremely fast autocomplete — not a fact-checking machine.
_______________________________

📚 2. Training Data Isn’t Perfect

AI learns from the internet and other large-scale texts. But:

- Some of that information is outdated, biased, or just plain wrong.
- If the training data had errors, the AI might repeat or reinforce them.
________________________________

🧠 3. Complex or Nuanced Questions

Some questions are just hard, even for experts — like:

- Medical advice
- Legal interpretations
- Deep science or math

AI may give a simplified or incorrect answer, especially if the topic requires expert judgment or up-to-date research.
________________________________

💡 4. Lack of Real-Time Awareness

Unless connected to live data (which I can be if allowed), I don't always know:

- Recent events
- Local changes
- Current product specs, laws, or news

So answers might be accurate as of a certain time, but no longer valid.
_______________________________

How We’re Improving

OpenAI is actively working on:

- Reducing hallucinations (false facts)
- Improving fact-checking
- Letting users verify sources
- Giving clearer disclaimers when answers may be unreliable"
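To see the "fast autocomplete" idea from point 1 in action, here's a minimal toy sketch (my own, not anything from ChatGPT; real models are neural networks over subword tokens, not word counts). It always predicts the statistically most likely next word, and notice there is no fact-checking step anywhere:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which in a tiny
# corpus, then always predict the most frequent follower. Real LLMs are
# vastly more sophisticated, but share the core move: predict what
# usually comes next, not what is verified to be true.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common follower of `word`, or a fallback."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # -> "cat": likely, whether or not it's true
```

Scale that same idea up by billions of parameters and you get fluent text that is statistically likely, but not necessarily true.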
 
In my way of thinking, that's the most compelling reason why AI could really go downhill, and fast!
Yeah... and I've seen some scary things. It used to be that you could tell AI images by things like mangled fingers... often six fingers, etc. But now deepfakes have gotten so "good" (meaning frightening) that they make it look as if people are saying things they didn't say and/or doing things they didn't do. My FB feed is absolutely filled with it. I ignore it all... but not everyone will do the same, and I foresee huge problems. Bottom line... at some point there will be no line between real and fake, and we'll never know what's true.
 
I have a patch of lawn that needs to be reseeded. It's about 15 ft by 20 ft, and the ground is hard and needs to be torn up and tilled. It's more than I want to do with a shovel, so I asked for advice on which rototiller would be the best choice for such a small job, money being the primary concern.

The answer was the Troy-Bilt Super Bronco ($1,200), and it gave me credible reasons why. I absolutely hated that answer and rejected it as bad advice. It's not what I wanted to hear at all. The moral of the story: don't confuse what you don't like with an untruth, and don't reject AI because of a personal bias. I'm not spending $1,200 on a 15 x 20 ft patch of weeds. I asked AI because I wanted to know something, but I wanted the answer to be something I liked.

So what is the correct answer? I don't know. I'm still where I was when I started. This doesn't mean AI failed. I'm not rejecting AI on these grounds, but I'm not defending it either. My main issue is that AI mines data from the internet, which is the most unreliable part of the media. Do I understand this correctly? I don't know if it filters what it mines for truth. I just don't know, but the source of the data is a bad one.
 
The local Ace Hardware rents tillers.
 
I have to give AI credit there. In fact, it did offer that as a suggestion, but my bias comes in: I don't like renting stuff. A silly bias.
I don't like renting either. You never know how well the tool is going to work. Will it break while you're using it (even correctly), leaving you to pay for the repair? Will the job take longer than anticipated, doubling the cost? I did rent a hand-held grinder (to grind a concrete floor smooth) and it worked fine, but I admit to being a little anxious about it.
 
I asked AI, "What percent of Americans drink coffee?" It answered, "Approximately 66% of American adults drink coffee each day. This translates to roughly two-thirds of the adult population consuming coffee daily."

That answer is backed up by the National Coffee Association, almost word for word, but other sources vary from as low as 58% to as high as 80%, so who knows what is correct.
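Just to put numbers on how fuzzy that is, here's a quick Python sketch using only the three figures mentioned above (the source labels are mine, purely for illustration):

```python
# The estimates cited above span a wide range; any single headline
# number hides the disagreement between sources.
estimates = {
    "low-end source": 0.58,
    "National Coffee Association": 0.66,
    "high-end source": 0.80,
}

values = list(estimates.values())
spread = max(values) - min(values)          # 22 percentage points
midpoint = (max(values) + min(values)) / 2  # 69%

print(f"range: {min(values):.0%} to {max(values):.0%}")
print(f"spread: {spread:.0%}, midpoint: {midpoint:.0%}")
```

With a 22-point spread between sources, "who knows what is correct" seems about right.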
 
It seems to me that relativism/post-modernism kinda leaves accuracy a moot point... and that's what I'd guess the majority of AI techs follow, to some degree or other.
 

