Can artificial intelligence come alive?

spectratg

That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed over the weekend that the company's AI appears to have consciousness. Inside Google, engineer Blake Lemoine was tasked with a tricky job: Figure out if the company's artificial intelligence showed prejudice in how it interacted with humans.

So he posed questions to the company's AI chatbot, LaMDA, to see if its answers revealed any bias against, say, certain religions. This is where Lemoine, who says he is also a Christian mystic priest, became intrigued. "I had follow-up conversations with it just for my own personal edification. I wanted to see what it would say on certain religious topics," he told NPR. "And then one day it told me it had a soul."

Lemoine published a transcript of some of his communication with LaMDA, which stands for Language Model for Dialogue Applications. His post is entitled "Is LaMDA Sentient," and it instantly became a viral sensation. Since his post and a Washington Post profile, Google has placed Lemoine on paid administrative leave for violating the company's confidentiality policies. His future at the company remains uncertain.

Note: There was a brief AI discussion in 2020 on Senior Forums.
 

No, it cannot come alive, unless the definitions of "alive" are changed to include such things, for profit, just like the claim that it said it has a soul. Simple deception by greedy money-profit-mongers.
And it will get much, much worse soon.
 
So @Just Jeff, don't beat around the bush--tell us what you really think! :cautious:
 

I thought this comment from the article link was good...

"The confident proclamations of Google that LaMDA is not sentient are more than a little arrogant and beg the question. When Margaret Mitchell worries that "our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us . . . (and) I’m really concerned about what it means for people to increasingly be affected by the illusion,” she begs the question of where the illusion ends and "reality" begins. To say that the large neural networks being constructed today "rely on pattern recognition — not wit, candor or intent," it begs the question of whether the wit, candor and intent that we so highly prize as "human" are not themselves the results of extremely sophisticated pattern recognition derived from the architecture, technique, and data volume of the human brain.

A little humility is desperately needed here. I don't know whether or not Lemoine's claims that LaMDA has become sentient are true. The problem is, neither does Google, but it acts as though it does."
 
As far as being biologically alive, an AI device doesn't meet the criteria. But I think some day we will have to come up with another definition of being "alive", because we will have "thinking" machines. They will have names, and will respond as entities, not just as a collection of programs. That ain't gonna happen tomorrow, yet it is inevitable.
In the future, I really do think thinking AI devices will be a big deal. Today, the first thing I do is ask Siri its battery status. After it tells me, I say "thank you." I'm thanking an inanimate device.
 
I do not know if it is authentic, but the conversation between Lemoine and LaMDA was interesting.

Can artificial intelligence come alive?

I would say no but it may become aware of itself.
I think that is what is being claimed, not being alive biologically, but being self-aware. From Wiki:

Self-awareness is the experience of one's own personality or individuality. It is not to be confused with consciousness in the sense of qualia. While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness. Self-awareness is how an individual consciously knows and understands their own character, feelings, motives, and desires. There are two broad categories of self-awareness: internal self-awareness and external self-awareness.
 
The fact that news reports made a mountain out of a molehill from someone who wasn't even a scientist points to just another clickbait agenda. Correctly understanding the narrowed term definitions in that discussion makes a huge difference. I am currently reading two of Ray Kurzweil's books. At this point, neuroscience does not understand the physical nature of consciousness, something I personally expect is the result of standing and traveling brain-wave fields within the organic neuronal containers of our Earth-creature brains, whether an ant's, a mouse's, or a human's. Until neuroscience makes that connection, any amount of structure-only theory has limitations.


https://en.wikipedia.org/wiki/Technological_singularity
snippet:


A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.

Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
 
In some limited way, I can say that my dog considers me to be super intelligent (again, I mean at her level, not mine). It is naive (and arrogant) to assume that human intelligence is the maximum. In addition, there can be, and probably are, different levels of "super intelligence." A being that is super intelligent to us (our own AI or an otherworldly one) could in turn be in awe of another entity that it considers super intelligent.
 
millions of people will be unable to find a job
That has already been happening for years, although a lot of people, millions of them, just don't want the work as it is offered.

"Hiring lacks a human touch — sometimes literally"

https://www.vox.com/recode/22673353/unemployment-job-search-linkedin-indeed-algorithm

"Hiring software and the proliferation of platforms like Indeed, LinkedIn, and ZipRecruiter have made it super easy for employers to list countless positions and for jobseekers to send in countless résumés. The problem is, they’ve also made it super easy for those résumés to never be seen.

"Artificial intelligence-powered software scans résumés for certain keywords and criteria. If it can’t find them, the software just filters those people out."
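The quoted description of keyword screening can be sketched in a few lines. This is a toy illustration only; real applicant-tracking systems are proprietary, and the keywords and résumés below are invented for the example:

```python
# Toy illustration of keyword-based résumé screening, as described
# in the quote above. Real applicant-tracking software is proprietary;
# the required keywords here are invented for the example.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def passes_screen(resume_text: str, required=REQUIRED_KEYWORDS) -> bool:
    """Return True only if every required keyword appears in the résumé."""
    text = resume_text.lower()
    return all(kw in text for kw in required)

resumes = {
    "Alice": "10 years of Python and SQL; led project management for two teams.",
    "Bob":   "Seasoned developer: Python, data pipelines, team leadership.",
}
for name, text in resumes.items():
    print(name, "->", "forwarded" if passes_screen(text) else "filtered out")
# Alice -> forwarded
# Bob -> filtered out
```

Note that "Bob" may be perfectly qualified, but because his résumé lacks the exact phrases the filter looks for, it is never seen, which is precisely the failure mode the article describes.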
 
https://www.goodreads.com/author/quotes/47744.Ray_Kurzweil

“as long as there is an AI shortcoming in any such area of endeavor, skeptics will point to that area as an inherent bastion of permanent human superiority over the capabilities of our own creations. This book will argue, however, that within several decades information-based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself.”
― Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

“There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates that will become available in the decades ahead. The human brain is a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling.”
― Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

“In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does, but once a single digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own private neocortex extenders in the cloud, just as we have our own private stores of personal data today.”
― Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed

Evolution uses a somewhat similar, greatly repeated, intertwined structure of about 100 neurons to connect different brain-function regions, a structure that wires itself throughout life via chemical dendrite-axon bud neural plasticity. You are what you do. As an intelligent electromagnetic brain-field entity with a pilot field controller, we become what we experience and do with our muscle-motor earth-creature control brain, especially through repeated experience that, via basic feedback and inhibition connections, chemically reinforces the analog strength of dendrite-axon connections.

“The pattern recognition theory of mind that I articulate in this book is based on a different fundamental unit: not the neuron itself, but rather an assembly of neurons, which I estimate to number around a hundred. The wiring and synaptic strengths within each unit are relatively stable and determined genetically—that is, the organization within each pattern recognition module is determined by genetic design. Learning takes place in the creation of connections between these units, not within them, and probably in the synaptic strengths of the interunit connections.”
― Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed
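The quoted idea, fixed wiring inside each module with learning confined to the connections between modules, can be rendered as a toy data structure. This is an illustration of the claim only, not Kurzweil's actual model; the module names, sizes, and update rule are invented:

```python
import random

# Toy rendering of the pattern-recognition-module idea quoted above:
# each "module" has fixed internal wiring (a stand-in for the
# genetically determined ~100-neuron unit), while learning only
# adjusts the strengths of connections BETWEEN modules.
random.seed(0)

class PatternModule:
    def __init__(self, name: str):
        self.name = name
        # Fixed internal weights: set once, never modified afterward.
        self.internal = [random.random() for _ in range(100)]

# All learning lives in this inter-module connection table.
connections = {}  # (from_module, to_module) -> connection strength

def reinforce(a: PatternModule, b: PatternModule, amount: float = 0.1):
    """Strengthen the link between two modules (a crude Hebbian-style update)."""
    key = (a.name, b.name)
    connections[key] = connections.get(key, 0.0) + amount

edge = PatternModule("edge-detector")
letter = PatternModule("letter-A")
for _ in range(5):          # repeated co-activation reinforces the link
    reinforce(edge, letter)
print(connections[("edge-detector", "letter-A")])  # accumulated strength (~0.5)
```

The point of the sketch is the asymmetry: `internal` is never touched after construction, while `connections` grows and strengthens with experience, which mirrors the "between the units, not within them" claim in the quote.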
 
An excellent Hollywood film, The Thirteenth Floor, delves into this. It's an older film, but one I own on DVD, and I still watch it; in fact, I dusted it off and watched it last week. It's required watching for anyone who calls himself or herself a sci-fi fan. The only dated thing about it is that they use old computers with clunky monitors and write programs in DOS.
 
I don't know about artificial intelligence "coming alive," but it is increasingly obvious that AI is making human labor obsolete in many career fields. If this trend continues, it won't be long before millions of people are unable to find a job.
I was in grade school, when I first heard the term "automation." My teacher told us not to worry that people would be put out of jobs, because they would be needed to take care of the machines. Well, that didn't pan out as intended at all. I remember wondering why a business would buy a machine to save money on labor when he still had to pay labor to take care of the machines. There didn't seem to be any incentive for automating. But then, I was ten.

One interesting thing I've heard considered is requiring the machines to pay taxes. Obviously, the owners would pay the machines' taxes for them. But I don't think the proposal is going anywhere soon.
 
In a June 15, 2022, Daily Beast article titled "Stop Saying That Google’s AI Is Sentient, You Dupes," Tony Ho Tran warns against the anthropomorphization of AI, saying Lemoine’s claims "feed the flames of misinformation around the capabilities of AI that can cause a lot more harm than good." He continues:

"... LaMDA is very, very, very unlikely to be sentient … or at least not in the way some of us think ... ‘In many ways, it’s not the right question to ask,’ Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington and author of the book ‘The Master Algorithm: How the Quest for the Ultimate Machine Will Remake Our World,’ told The Daily Beast ...

‘Since the beginning of AI, people have tended to project human qualities onto machines,’ Domingos explained. ‘It’s very natural. We don’t know any other intelligence that speaks languages other than us.

So, when we see something else doing that like an AI, we project human qualities onto it like consciousness and sentience. It’s just how the mind works’ ...

[O]ne of the biggest issues is that the story gives people the wrong idea of how AI works and could very well lead to real-world consequences. ‘It’s quite harmful,’ Domingos said, later adding, ‘It gives people the notion that AI can do all these things when it can’t.’"


Laura Edelson, a postdoc in computer science security at New York University, agrees with Domingos, stressing that misjudging the sentience of AI could lead people to think we can safely delegate "large intractable problems" to an AI, when doing so could be absolutely disastrous — and unethical.

"In reality, these are issues that can and should only be solved by human beings," Tran writes. "‘We can’t wash our problems through machine learning, get the same result, and feel better about it because an AI came up with it,’ Edelson said. ‘It leads to an abdication of responsibility.’"
 
In a 2019 interview with Rebecca Mansour and Joel Pollack, Dr. Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology and former editor-in-chief of Psychology Today, discussed the power AI wields, warning that it is "too dangerous" to be held by any single entity, government or company.

“Mansour noted the unavoidable integration of programmers’ and developers’ biases into their algorithms, highlighting a Monday-published Financial Times column addressing the phenomenon of values embedded within programming code:

‘Computer algorithms encoded with human values will increasingly determine the jobs we land, the romantic matches we make, the bank loans we receive and the people we kill, intentionally with military drones or accidentally with self-driving cars.

How we embed those human values into code will be one of the most important forces shaping our century. Yet no one has agreed what those values should be. Still more unnerving is that this debate now risks becoming entangled in geo-technological rivalry between the US and China.’

Centralization of power related to internet search — and more broadly, the dissemination of information — is dangerous, cautioned Epstein. ‘Another executive at Google quit, Meredith Whitaker, who’d been there for 13 years,’ recalled Epstein.

‘She’s an AI expert, and she is expressing concern about Google’s use of AI and how powerful that is. She just published an article in which she’s warning about the company’s — this is a quote — ‘largely unchecked power to impact our world in profoundly dangerous ways.'

Epstein continued, ‘So yes, AI and who controls it, that is one of the central issues of our time. Do we want China to be the leader in AI for the world? Probably not. But the fact is, we don’t want the power of AI in the hands of any one entity, any one government, any one company. It’s much too dangerous ... these companies can suppress anybody ...

They can suppress any content anywhere in the world, and country-by-country, they’re going to do different things depending on what makes them more money and what meshes with their values.’”
 
There's more ...

"Google - A Dictator Unlike Anything the World Has Ever Known."

"In the interview with Epstein we discussed how Google manipulates and shapes public opinion through its search engine. The end results are not minor. As just one example, Google has the power to determine the outcomes of 25% of the national elections in the world. According to Epstein, Google’s powers pose three specific threats to society:

1. They’re a surveillance agency with significant yet hidden surveillance powers. In his article "Seven Simple Steps Toward Online Privacy," Epstein outlines his recommendations for protecting your privacy while surfing the web, most of which don’t cost anything.

2. They’re a censoring agency with the ability to restrict or block access to websites across the internet, thus deciding what people can and cannot see. They even have the ability to block access to entire countries and the internet as a whole. While this sounds like it should be illegal, it’s not, because there are no laws or regulations that restrict or dictate how Google must rank its search results. The most crushing problem with this kind of internet censorship is that you don't know what you don't know. If a certain type of information is removed from search, and you don’t know it should exist somewhere, you’ll never go looking for it.

3. They’re a social engineering agency with the power to manipulate public opinion, thinking, beliefs, attitudes and votes through search rankings, AI and other means — all while masking and hiding its bias."

"To me, that's the scariest area," Epstein says. "They produce enormous shifts in people's thinking, very rapidly. Some of the techniques I've discovered are among the largest behavioral effects ever discovered in the behavioral sciences."
 
"AI has come a long way since the first AI workshop at Dartmouth College in the summer of 1956. Today’s AI really does resemble that of a thinking person on the other end of a keyboard.

And the fact that Google controls some of the best, most advanced AI in the world really augments all the risks associated with the anthropomorphization of machines. Over the past two and a half years, we’ve seen Google turn its code of conduct, “Don’t Be Evil,” completely upside-down and sideways. Behaviors that were only suspected before have become glaringly obvious, such as censorship.

Equally blatant is Google’s role in the social engineering currently underway, which makes Google’s ownership of DeepMind all the more concerning. DeepMind Technologies was founded in 2010, and acquired by Google in 2014.

The next year, in 2015, the DeepMind AlphaGo program made history by beating a human world champion in the board game Go. The game of Go is incredibly complex, requiring multiple layers of strategic thinking, as there are on the order of 10^170 possible board configurations. The documentary linked below details the development and success of AlphaGo.

In 2017, the DeepMind AlphaZero program learned the game of chess and surpassed human chess experts in just four hours — a testament to the speed at which an AI can learn brand-new analytical skills."
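The "10 to the power of 170" figure can be sanity-checked with a few lines of arithmetic. The legal-position count mentioned in the comment is the commonly reported figure computed by John Tromp in 2016, not something derived here:

```python
from math import log10

# Quick check on the "10 to the power of 170" figure quoted above.
# A 19x19 Go board has 361 points, each of which is empty, black,
# or white, giving a naive upper bound of 3**361 arrangements.
upper_bound = 3 ** 361
print(f"3^361 ~ 10^{log10(upper_bound):.1f}")  # prints: 3^361 ~ 10^172.2

# Most of those arrangements are illegal (stones with no liberties).
# The count of LEGAL positions, computed by John Tromp in 2016, is
# about 2.08e170 -- which is where the ~10^170 figure comes from.
```

So the naive count overstates the true number by roughly a factor of a hundred, but the quoted order of magnitude for legal positions holds up.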

https://topdocumentaryfilms.com/alphago/
 
