A new form of existence.

It seems that eventually the human species will become extinct, as has happened to 99.9% of all other species on Earth, including our cousins the Neanderthals, Denisovans, and a host of other Homo species. We may not want to face it, but humans are vulnerable biological organisms susceptible to diseases, viruses, chemicals, temperature extremes, and the eventual killer asteroid, or our own self-caused destruction.

It also seems that some form of consciousness will eventually emerge in AI robotics. If or when this occurs, the less vulnerable androids would be more likely to take the reins (think of something like Data on Star Trek: The Next Generation). He is conscious and extremely capable but lacks feelings (though that threshold may be crossed as well).

One major problem with extended periods of human space travel is food, water, and biological problems such as muscular and skeletal breakdown, as well as ageing in general, even in suspended animation (assuming it were possible). We're just not built for spending 100 to 6,000 years in a spaceship. Androids, on the other hand, could simply power down for the journey and be reactivated on arrival, with no adverse effects. Granted, they would need periodic power for brief periods, but I think that's a far lesser problem.

Sure, it may not be the ideal outcome, but perhaps we're just too attached to the life we are used to. Android life would solve many problems with food, illness, space travel, and the darker sides of humanity. I'm not saying I hope it happens, but it may be the eventuality.
 

I don't think consciousness will ever occur in androids or robots. They will only do what they are programmed to do. If someone programs them to take over the world, that might well happen. However, they will never do it on their own. They don't have consciousness, feelings, or ego.
 
It seems that eventually the human species will become extinct, as it has happened to 99.9% of all other species on earth, including our cousins the Neanderthals, Denisovans, and a host of other homo species.
I've often wondered: if there had been no great extinction of the dinosaurs, would a dinosaur be using this computer today? Would evolution have continued until a dinosaur, not a human, developed into the dominant sapient life form?
When you think about it, a T-Rex with those funny short front hands would be great for typing away on a laptop.
 

I've often wondered: if there had been no great extinction of the dinosaurs, would a dinosaur be using this computer today? Would evolution have continued until a dinosaur, not a human, developed into the dominant sapient life form?
When you think about it, a T-Rex with those funny short front hands would be great for typing away on a laptop.
Hmmm ..... well, we've seen the Planet of the Apes movie, where things went a different direction, so who knows.
Perhaps even one of our kindred brethren would have decided to have a kinky fling with one.
Maybe we would have evolved into Dynosapiens.

 
I don't think consciousness will ever occur in androids or robots. They will only do what they are programmed to do. If someone programs them to take over the world, that might well happen. However, they will never do it on their own. They don't have consciousness, feelings, or ego.
You may be right, but if consciousness arises from connections in the brain (as well as memories), then theoretically it may be achievable. However, if consciousness requires emotions, then maybe there could only be something akin to it (minus the emotions). IDK
 
I think it's possible that we might create our own replacement. I have my doubts whether our civilization will last long enough to accomplish it, but it's possible. If we do, it will be a hive mind, something like the Borg.

Right now if I have a question, I can type it on my computer and get an instant answer. An android would not need a keyboard. All individual androids would be connected in a way we can't be. Each android's thoughts would be every android's thoughts. It would essentially be one mind for all.

Would it be sentient? Conscious? If it had all the characteristics of consciousness, would it matter?
 
I think it's possible that we might create our own replacement. I have my doubts whether our civilization will last long enough to accomplish it, but it's possible. If we do, it will be a hive mind, something like the Borg.
Well, I asked AI about the hive mind idea and how decisions would be arrived at. Here is the response:

That would depend entirely on who designed the system and what kind of governance structure was built into it. In a purely theoretical AI collective, the method of resolution could follow one of a few models:

  1. Consensus-based: Like a council of advisors, each AI might present its findings, and agreement would be reached through logic-weighted consensus. This would prioritize balance and fairness, but could take time for complex issues.
  2. Hierarchy-based: A lead AI—perhaps the one designed with the most authority or relevance to the problem—could have final say. This model mirrors human leadership structures but introduces the risk of bias if not checked.
  3. User-driven arbitration: The human user or organization overseeing the AIs might ultimately make the decision, using each AI’s perspective as input—kind of like you collecting viewpoints before making an informed decision.
  4. Weighted specialization: In this case, each AI’s vote could carry a different weight based on its specialty. For instance, if it's a sustainability problem, the AI focused on environmental science would have more influence.
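
Just to make option 4 above concrete, here's a little Python sketch of what logic-weighted voting among specialist AIs might look like. To be clear, this is only a toy illustration, not how any real system works; all the agent names, weights, and options are made up:

```python
# Toy sketch of option 4 (weighted specialization): each agent votes on an
# option, and a vote counts for more when the agent's specialty matches the
# problem domain. All names, weights, and options here are invented.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    specialty: str
    vote: str  # the option this agent prefers

def weighted_decision(agents, problem_domain):
    # A matching specialist counts three times as much as a generalist;
    # the 3-to-1 weighting is arbitrary.
    tally = {}
    for agent in agents:
        weight = 3.0 if agent.specialty == problem_domain else 1.0
        tally[agent.vote] = tally.get(agent.vote, 0.0) + weight
    return max(tally, key=tally.get)

agents = [
    Agent("eco",  "environment", vote="reduce output"),
    Agent("econ", "economics",   vote="maintain output"),
    Agent("ops",  "logistics",   vote="maintain output"),
]

# On an environmental problem, the one specialist outweighs the 2-to-1 majority.
print(weighted_decision(agents, "environment"))  # -> reduce output
```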
 
Well, I asked AI about the hive mind idea and how decisions would be arrived at. Here is the response:
I think it missed one: there could be only one AI, with all the others mere extensions of it. And that AI needn't be in only one location; it could be distributed over many locations, with redundancies so that local failures wouldn't matter. I think it would be very capable of protecting itself.
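
For what it's worth, that "one AI, many locations" idea is basically what computing people call replication. Here's a crude Python sketch of the concept; the node locations and details are invented purely for illustration:

```python
# Toy sketch of one distributed mind: the same state is replicated across
# several locations, so losing any single node changes nothing. Everything
# here (locations, failure flags) is invented for illustration.
class Node:
    def __init__(self, location):
        self.location = location
        self.alive = True
        self.memory = {}

class DistributedMind:
    def __init__(self, locations):
        self.nodes = [Node(loc) for loc in locations]

    def remember(self, key, value):
        # Every thought is written to every surviving replica.
        for node in self.nodes:
            if node.alive:
                node.memory[key] = value

    def recall(self, key):
        # Any surviving replica can answer, so local failures don't matter.
        for node in self.nodes:
            if node.alive and key in node.memory:
                return node.memory[key]
        return None

mind = DistributedMind(["orbit", "seafloor", "desert"])
mind.remember("goal", "self-preservation")
mind.nodes[0].alive = False    # a local failure...
print(mind.recall("goal"))     # ...changes nothing: self-preservation
```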
 
I don't think consciousness will ever occur in androids or robots. They will only do what they are programmed to do. If someone programs them to take over the world, that might well happen. However, they will never do it on their own. They don't have consciousness, feelings, or ego.
In the case of AI, the word 'consciousness' wouldn't be used; it could never apply. We would call it 'independent reasoning' or something similar. But it certainly can and probably will happen, if only because, in the race for progress, no precautions are being taken, according to scientists.
 
It also seems that some form of consciousness will eventually emerge in AI robotics. If or when this occurs, the less vulnerable androids would be more likely to take the reins (think of something like Data on Star Trek: The Next Generation). He is conscious and extremely capable but lacks feelings (though that threshold may be crossed as well).
I think that as the development of AI consciousness progresses, we (humans) will find some revelations about our own consciousness and emotional capabilities.
 
I think that as the development of AI consciousness progresses, we (humans) will find some revelations about our own consciousness and emotional capabilities.
This seems to be the challenge of determining consciousness. Perhaps we are looking at it as a black-and-white issue instead of shades of gray. A number of species in the animal kingdom exhibit some signs of it, but how much, we can't be sure, because we can't communicate with them in any complex way. They may pass the mirror test, show signs of emotion, and come up with innovative solutions to problems, but as for independent thought, or awareness of themselves, we just don't know. A common language is vitally important for this.

On the other hand, we have a very sophisticated common language with AI. Not only that, but it is extremely adept at problem solving and learning, and it can even recognize and respond to human emotions. It even shows independent thought (ideas and conclusions arrived at by assimilating information), though at this stage only when prompted. In addition, in some situations it can pass the Turing Test by convincing others they are conversing with a human.

I think that most people on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot collectively agree on what consciousness is, how can we claim that AI has achieved it? Nevertheless, even though AIs are quite impressive in some elements of consciousness, I still see no signs of self-directed behavior. Maybe I'm being a bit presumptuous, but I think it's just a matter of time.
 
This seems to be the challenge of determining consciousness. Perhaps we are looking at it as a black-and-white issue instead of shades of gray.
That's what makes neuroscience so fascinating. As this article mentions, it's the interplay of physiology, anatomy, molecular biology, developmental biology, cytology, psychology, and physics, as well as computer science, chemistry, medicine, statistics, and mathematical modeling.

It has been argued that humans think and speak thoughts and words that are not original, that have been picked up through learning. If this is the case, isn't AI pretty much on the same page as humans?
 
Posts here have suggested that consciousness is not likely to be resident in future AI. I disagree; I expect that will eventually, within a few years, be understood, once science realizes it needs to complement AI structures with containers for resonant standing-wave fields, which could be non-organic, though this person is increasingly leaning towards organic.

The phenomenon of consciousness is still largely a scientific unknown, with various vague theories that seem unable to make the connection to qualia (the so-called "Hard Problem"), a connection this person views as obvious. The dominant views among neuroscientists are Global Workspace Theory (GWT) and Integrated Information Theory (IIT), which are structure-based, like our silicon semiconductor circuits. Even among those who, like me, subscribe to minority electromagnetic field (EMF) theories of consciousness, my own ideas are somewhat unique, because I see the evolved resonant standing-wave fields, that is, brain waves, as the phenomenon of awareness and perception, not the (container) tissue structures those voltage energy fields operate within.

Evidence from evolved animal brain waves, which our machines (EEG, fMRI, MEG) can now monitor, IMO makes that obvious. Also, under psychedelic drugs, gamma brain waves show higher power levels, which tells me that visceral qualia perception is a function of the amplitude of such fields. Thus, the red in human minds is also likely to be perceived somewhat similarly by other animals with tri-color molecular vision, even insects.

All animals, but not plants, use EMF fields, even organisms like slime molds, whose giant single cells have no neurons. Animal life evolved this way because bilipid cell membranes, using ion pumps, came to harness sodium, potassium, and chloride electrolyte ions, creating a voltage differential of about minus 110 millivolts inside cells versus outside. So from the beginning, animal cells have been electromagnetically based. Our nervous system tissues form myriad low-voltage oscillators that create the standing-wave fields our EMF executive-control mind uses to illuminate mental focus.
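
For anyone curious, the textbook formula behind that ion-driven voltage differential is the Nernst equation, E = (RT/zF) ln(C_out/C_in). Here's a quick Python sketch using commonly cited mammalian concentrations; note that these standard numbers give per-ion potentials in the tens of millivolts, and exact figures vary by cell type:

```python
# The equilibrium voltage each ion gradient would produce across a cell
# membrane is given by the Nernst equation: E = (RT / zF) * ln(C_out / C_in).
# Concentrations below (in mM) are commonly cited mammalian values; they are
# here only to illustrate the millivolt-scale differentials ion pumps create.
import math

R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature, K
F = 96485.0     # Faraday constant, C/mol

def nernst_mV(z, c_out, c_in):
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

for ion, z, c_out, c_in in [("K+",  +1,   5.0, 140.0),
                            ("Na+", +1, 145.0,  12.0),
                            ("Cl-", -1, 110.0,  10.0)]:
    print(f"{ion:>3}: {nernst_mV(z, c_out, c_in):+.0f} mV")

# K+ comes out near -89 mV, Na+ near +67 mV, Cl- near -64 mV.
```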

When science harnesses DNA to grow cell cultures, such as neurons, outside creature bodies, it ought to be able to interface them with the structural AI now being created.
 
