So you think you're a sentient being? Guess again!

Neil deGrasse Tyson has talked about the 'living in a simulation' theory several times, largely because at first he found it troubling but hard to dismiss or refute. But when a friend of his found a way to lower the odds of us living in one, he latched onto it like a drowning man to anything that could keep him afloat.

At the beginning of the video he explains the basic theory, in his usual animated and humorous style, but at about 3:33 he starts talking about the counter-argument that lifted his anxiety about it.

 
The era of machines becoming sentient is drawing near
Nathan, you pose a huge problem. What will happen when we do create a machine that duplicates the human brain, one able to think, feel and respond like any human? I don't think humans and human machines will get along. I fear it will be like the Planet of the Apes, only Planet of the humanoid robots.
 
 
"I don't think humans and human machines will get along. I fear it will be like the Planet of the Apes, only Planet of the humanoid robots."
There may need to be legally binding standards established really soon, to ensure that developers don't let that genie out of the bottle. However, it may already be too late. Kind of like trying to tell Iran and N.Korea that they can't play with nukes.
 
"I fear it will be like the Planet of the Apes, only Planet of the humanoid robots."
Terminators... Skynet...
 
@Nathan said: "There may need to be legally binding standards established really soon, to ensure that developers don't let that genie out of the bottle."
Apparently Steve Wozniak, the co-founder of Apple, has expressed such concerns.
Quite frankly, if it comes to pass it will be due to the deadly combination of human arrogance and ignorance (in the sense of not being able to think through whether we should do something just because we can).

Especially considering that science fiction literature, TV shows and movies have given both suggestions and warnings for decades: cautionary tales such as those @oldaunt mentions, and several original Star Trek episodes, particularly 'A Taste of Armageddon'. (More on this in another comment i'll add after this.)

But way back there was Asimov's input. Keep in mind Isaac Asimov was a prolific writer (some 469+ books in his lifetime) of non-fiction as well as science fiction. He was a professor of biochemistry, and well informed about other sciences. In his robot stories he put forth the notion that all robots (AI) should be programmed to follow three basic rules:
  1. A human person may not be harmed by a robot, nor may a robot put a human being in danger
  2. A robot is required to obey directions, unless such directives are in direct violation of the First Law
  3. It is the responsibility of a robot to ensure its own survival, provided that doing so does not violate the First or Second Laws
 
"A human person may not be harmed by a robot, nor may a robot put a human being in danger"
Interesting; it seems that first rule has already been broken, if you count drones anyway.
@Nathan said: "There may need to be legally binding standards established really soon, to ensure that developers don't let that genie out of the bottle."
Perhaps others worry more about this than I do, but I would want to be very careful in trying to set government standards for risks that are, at this point, very hypothetical. The downside of this is that it could hinder development of technology that could be very useful to us.

The government is not very good at setting standards and regulations even for real risks that we do understand; I suspect it would be worse for this.

I think another industry may be leading the way in humanoid robot development. Not that we have any real need for these things, but it appears there is a market.
 
Many SciFi stories have dealt with the dangers of looking at everything only intellectually. While there are times we should set our emotions aside, the simple fact of how wound up people get about telling others how they should think and live should make it clear that all humans are emotional beings. So it is in a sense irrational not to take human emotions about events and other humans into account.

In the original ST episode 'A Taste of Armageddon' the Enterprise comes across a planet that has been engaged in a world war for a very long time. The war is waged entirely by computers: they plan and, in cyberspace, carry out 'battles', with casualties and fatalities determined 'logically'. Only then do living, breathing humanoid beings calmly report to death chambers as a result.

i was always more of a Spock than a Kirk fan. The speech Kirk gives beginning at 2:46 in the clip below is one of the few times i felt differently. i remember watching it when it aired and being tremendously moved--at the time the Vietnam conflict was in full swing. i cried because i feared we were not getting the message about the horrors of war. "When will we ever learn?" (from the Pete Seeger song 'Where Have All the Flowers Gone?').

 
@Alligatorob said: "Interesting, it seems, if you count drones anyway, that first rule has already been broken." and "The downside of this is that it could hinder development of technology that could be very useful to us."
The question is what use? Drones to some extent 'sanitize' the process for those that use them. A lot of other near-AI gadgets (Alexa-type things) mostly serve to make us lazier--both physically and mentally. While voice-activated devices are a great boon to people with physical difficulties typing at a keyboard or (OMG) getting a hard-copy book, i can imagine scenarios where, if the majority of citizens in a country depend on such things for news/info, an authoritarian government would have a great helper in swaying the masses with disinformation.

@Nathan said: "... to ensure that developers don't let that genie out of the bottle."
Personally i think it may already be out.
 
If a machine, named Charlie, is ultimately developed that has the same mental ability as a human--it can talk and respond, has ideas, etc.--then for all intents and purposes, Charlie is a "being". Well, if Charlie is so human-like, does it have any human rights? If a biological being can have "rights", can a mechanical version with all the same attributes of the biological being not have "rights"? Can Charlie be considered a "person"?
 
"The question is what use?"
I was thinking of robotics in general, but I do know of lots of uses drones are being put to. A company I used to work for had a whole drone business, none of it military, and I still work on a few things where drones are put to good use. A few examples:
  • Land surveys: drones can quickly, easily, and accurately do land surveys for a fraction of the cost of conventional surveys, and with lidar they can see through vegetation. This is revolutionizing the survey business.
  • Traffic surveys, including real-time, up-to-date info on traffic slowdowns and jams.
  • A wide variety of inspections; some I know of include property surveys, roof inspections, and close inspection of things like industrial chimneys and smokestacks. The drones can fly right down into the chimneys looking for fouling, defects or signs of damage. There are also air quality sensors that allow the drones to do a very detailed and close inspection of contaminants being emitted.
  • Lots of kinds of photography, such as wedding photos, photos and videos for real estate listings, and so on.
There are a lot more examples. Many of these things could be done without drones, but at a much higher cost, and some, like flying down into an industrial chimney, can only be done with drones.

There are many more such examples for other kinds of robotics.
 
"Well, if Charlie is so human-like does it have any human rights? Can Charlie be considered a 'person'?"
Possibly. This was addressed in one of my favorite Star Trek: The Next Generation episodes, "The Measure of a Man", where some Starfleet folks wanted to study the android Data, quite possibly to the point of dismantling him, to see if they could replicate the process. Capt. Picard successfully argued in a judicial hearing that Data was a sentient being.
 

