I'm supposing we can generally trust humans more than AI!!

Paco Dennis

Not necessarily. Humans are inherently biased. Our judgments can be influenced by emotions, culture, and personal experiences. Humans make mistakes, and so do AI systems. Humans also have intentions, which can be both good and bad. Maybe it is not about trusting one over the other. Perhaps we can trust both BUT WITH A HEALTHY DOSE OF SKEPTICISM AND CRITICAL THINKING. :)
 

Like any other computerized database or software, I would think much depends on who programmed it, for what purpose, and how much they value accuracy and verifiable facts. So using an AI search does not necessarily mean answers will be untainted by human biases. Consider also that, at this point in time, the info they gather will mostly be from human-generated sources.
 


AI isn't using traditional methods of development. Data (facts, info, etc.) aren't fed to it; there is no all-encompassing AI database. AI reaches out onto the WWW to suck up everything it can. The magic happens as it understands topics, context, and relationships - all without being told them. It's quite fascinating (and mostly over my head).

The possible flaw is, the WWW is largely made up of things people have created. Therefore, their biases will still be there. On the other hand, AI won't be looking at single sites to find information; it'll be correlating everything it finds.

Asimov wrote some rules for robots many years ago, and they're worth repeating. Note that they address the relationship between the machines and ourselves, not the knowledge held by the machines. The same rules should apply to AI:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm;
(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;
(4) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
 
AI is powerful, but it is not infallible. :)
It's only as infallible as its designers and programmers.

AI has zero emotions and no ego. It doesn't sympathize, doesn't get angry, doesn't like or dislike anything, and it doesn't have motives. It's as much a tool as a wrench, and like a wrench, it's useful, but it can be used as a weapon depending on who's handling it and what their motives are. But obviously, it has no choice in the matter.
 
AI is already being used extensively. They've had it design a new painkiller that is powerful but non-addictive. NASA is using it to work through the mountains of data it has. It's also being used for complex diagnoses - the plain fact is, it can correlate symptoms and test results more quickly than any human.
 
Don't lean too hard on Asimov's Three Laws. There is already a large body of fiction where the Laws are given lip service but corporate greed and/or bad actors undermine them. It's only one step from fiction to actual practice.

The Netflix series "Better Than Us" was both scary and hopeful.
 

I'm supposing we can generally trust humans more than AI!! Yeah, look where that's gotten us?


Until further notice, AI can only act according to the data it is fed, unlike humans, who react according to the data they perceive, whether true or not. Either way, it doesn't matter as long as they believe in a cause. AI has no agenda and no cause except to learn.
 

Suggest you read the articles CallMeKate posted in comment#4.
 

