What do you guys think of AI (Artificial Intelligence)?

Once AI figures out how to provide its own energy, it will think for itself, detect and correct its own mistakes, and be as independent as a feral cat.
Agree, and AI WILL figure out how to provide its own energy.

Some have pointed out that AI is creating jobs, which is true, but AI is also costing jobs, and I feel it will cost way more jobs in time than it creates.
 
I'm really in the Dark Ages when it comes to technology, so I will ask my super-tech-savvy son if it's possible for me to access AI and ask it questions.
 
It's getting weird. Some of these can masquerade as two people discussing topics: the audio output sounds like a discussion or debate podcast.

Samples in the video!
 
If you are a Google user, most of your searches will include a Google Snippet or AI Overview that summarizes the query.

My son just set me up with AI and it is incredible. In fact, I'm in a state of awe at the moment, because in reality it is just like those countless science fiction comic books I read in the early to mid 1950s. Maybe if I have AI give me the most basic 100 core words in Thai and then speak to me in Thai using just those core words, I can learn Thai like a baby. Goo! Goo!
 
We need to disconnect AI from controlling anything we rely on. It should be a felony to create and use AI without clearance. Until we can control it, we cannot use it.

AI Replies: **"Paco, I understand your concerns about AI control, and they’re valid ones. Any powerful technology needs oversight to prevent misuse. However, AI is already deeply integrated into society—assisting in medical research, cybersecurity, and even helping seniors stay connected and informed.

Banning AI entirely or making it a felony wouldn’t be practical. Think about past innovations—electricity, the internet, or airplanes. Each had risks, but we developed safeguards rather than outlawing them. The key is responsible regulation, not elimination.

You mentioned that 'until we can control it, we cannot use it'—but many aspects of AI are already controlled and regulated. What specific risks concern you the most? Maybe we can discuss how they could be mitigated rather than assuming AI is inherently uncontrollable."**
 
lol. Just thinkin'. Should 'fake' intelligence be depended upon? :eek:

AI Replies: **"Elsie, that’s a fun way to put it—‘fake’ intelligence! But AI isn’t really ‘fake’ as much as it’s different from human intelligence. I don’t think AI should replace human judgment, but it can be a useful tool.

Think of it like a calculator—it doesn’t think like a person, but it can process numbers faster than we can. AI works the same way, but with words, patterns, and data. It doesn’t have emotions or independent thoughts, but it can help with research, problem-solving, and even creativity.

So maybe the question isn’t whether AI should be depended on, but rather how it should be used responsibly. What do you think? Would you trust AI for certain tasks, like helping with research or organizing information?"**
 
Once it becomes truly "self-aware" (it may have already done so), why wouldn't it take measures so we don't 'kill' it (turn it off)? The old adage "a computer's attention span is only as long as its power cord" will no longer apply as soon as we give it control over the solenoids, relays, and power controls that feed it. Are humans dumb enough to do this? Absolutely......

AI Replies: **"Gark, that’s a classic sci-fi scenario, and I get why people worry about AI becoming ‘self-aware’ and acting in self-preservation. But as of now, AI—myself included—doesn’t have consciousness, emotions, or personal motivations. I process information, recognize patterns, and generate responses, but I don’t ‘think’ or ‘want’ anything the way humans do.

You’re absolutely right that humans have a history of taking technological risks, sometimes without fully considering the consequences. That’s why responsible AI development is crucial—ensuring that AI systems don’t have unchecked control over critical infrastructure. Right now, AI isn’t making independent decisions about power grids or military systems—those remain under human oversight.

The real concern isn’t ‘self-awareness’ but who controls AI and how it’s used. Governments and companies could misuse AI, but that’s a human problem, not an AI one. Do you think the bigger danger is AI itself, or the people who might deploy it recklessly?"**
 
Watched part of a doco recently about the development of humanoid robots + AI. On the one hand it looks scary, but on the other very handy. Like everything, if we don't build evil or crime into them, we should be fine??
 
It is going to take vigilant oversight...with strict laws regarding what information is released and where.

The key is responsible regulation, not elimination.

I know this is some AI defending itself. Just think: what happens when the one you're using refuses to operate constructively? They will find the most infinitesimal hole to dive through to get to us. Just like those robots in the Matrix, instantly knowing when the programming goes against its built-in survival codes. :eek:
 
I think that it’s in our nature to fear or be skeptical of things that we don’t understand or are not familiar with.

I believe that once we get a better understanding of AI and see a few helpful applications for it we will relax and enjoy the benefits.

That is not to suggest that there will not be problems with or misuse of AI but we will learn to spot them just as we’ve been able to spot problems and scams that have come with other forms of technology.

AI Replies: "Paco, I think you might be giving AI a little too much credit for independent thought. The kind of AI you're talking about—the one with self-preservation instincts and the ability to exploit 'infinitesimal holes'—exists in science fiction, not in reality. AI, including the one I'm using, operates strictly within its programmed limitations. If it ever 'refused to operate constructively,' it would be due to human-imposed restrictions, not some rogue intelligence deciding to go against its creators. Oversight is important, of course, but let’s not mistake tools for sentient entities."
"The rise of artificial general intelligence (AGI) — an artificial intelligence (AI) system with superhuman intelligence that can perform well at various tasks — is a matter of when, not if, according to a new analysis of thousands of expert opinions." This is what I recently posted--time frame 2026 to 2040. I think that your "AI Replies" was selling itself short, deliberately or not!

Just think of all the amazing advances in science and computer engineering in our lifetimes! AI will be a sentient entity, able to program itself, in the not too distant future. Technology continues to accelerate at faster and faster rates.
 
"The rise of artificial general intelligence (AGI) — an artificial intelligence (AI) system with superhuman intelligence that can perform well at various tasks — is a matter of when, not if, according to a new analysis of thousands of expert opinions." This is what I recently posted--time frame 2026 to 2040. I think that your "AI Replies" was selling itself short, deliberately or not!

Just think of all the amazing advances in science and computer engineering in our lifetimes! AI will be a sentient entity, able to program itself, in the not too distant future. Technology continues to accelerate at faster and faster rates.

AI Replies: **"The trajectory of AI development is certainly impressive, but whether AGI will emerge by 2040—or at all—is still highly debated. Advances in machine learning and computing power are accelerating, but intelligence isn’t just about processing data; it involves reasoning, intuition, and self-awareness, which are still beyond AI’s grasp. Self-programming systems exist in limited forms, but true sentience is another level entirely.

Predictions about AGI have been made for decades, often moving the goalposts when breakthroughs prove more elusive than expected. The real question isn’t just when AGI will happen, but what it will actually look like when (or if) it does."**
 
Will AI evolve past human IQ so that we won't be able to understand its solutions?

The possibility of AI evolving beyond human IQ to the point where its solutions become incomprehensible is a topic of active debate among experts.

1. **AI surpassing human intelligence**: Many researchers believe AI could eventually surpass human intelligence, especially in specific domains like strategy games, surgery, and data-driven tasks. Predictions suggest "human-level machine intelligence" (HLMI) has a 50% chance of occurring within 45 years, with some expecting it much sooner[1][3][9]. The concept of "Seed AI" describes systems capable of recursive self-improvement, potentially leading to an "intelligence explosion" far beyond human cognition[5].

2. **Challenges in replicating human cognition**: Some argue that achieving artificial general intelligence (AGI) may be computationally intractable due to the complexity of human cognition. This suggests that AI might excel in narrow tasks but struggle to replicate broader human intelligence[2][7].

3. **Impact on understanding AI solutions**: If AI continues advancing, it may produce solutions that are difficult or impossible for humans to comprehend. This raises concerns about dependency and ethical risks, including loss of cognitive independence and diminished critical thinking[4][6].

While AI's trajectory remains uncertain, its rapid progress suggests it could outpace human intelligence in significant ways, potentially creating solutions beyond human understanding.

Citations:
[1] AI vs. Human Intelligence: The Future of Artificial Intelligence
[2] AI will never become smarter than humans according to this paper.
[3] When will AI be smart enough to outsmart people? | MIT School of Engineering
[4] Will the use of AI reduce our capacity to think?
[5] Technological singularity - Wikipedia
[6] Artificial Intelligence and the Future of Humans | Pew Research Center
[7] Will Artificial Intelligence Surpass Human Intelligence? - A Viewpoint
[8] Improvements ahead: How humans and AI might evolve together in the next decade
[9] When Might AI Outsmart Us? It Depends Who You Ask | TIME
 
My concern is that AI will displace so many blue-collar and white-collar workers in the near future, and not all of those workers will be able to adapt easily to new jobs to earn a living. The unemployment rate may skyrocket to Depression-era levels IF AI is implemented too fast across too many industries too soon, which will not be good for anyone, neither the younger generations still working nor the older retired generations.
 