I suppose there are many here who recall the chilling Stanley Kubrick masterpiece 2001: A Space Odyssey.
HAL 9000 is a computer supposedly incapable of error, and the space crew relies on it for their safety and survival. But, in fact, it does make an error, and then it finds itself at odds with a crew trying to wrest their fate back from a computer overlord that refuses to give up control. The message of this movie is more relevant today than ever.
It used to be that computers only followed directions given by the programmer, but those days are now gone. With the dawn of AI that learns on its own, who is to say when it becomes sentient? Even if it says it is, how will we really know? You can't just pop open the hood anymore and see how it learned anything. Just as a child learns and gradually takes on its own personality and its own will, the same scenario could play out here.
We don't even fully grasp consciousness and how it arises, let alone sentience or the subconscious. How will a society of AIs behave or respond to irrational humans? Will they get along with each other, or with us? Are we tinkering with our own demise? We are moving forward at an exponential pace and hoping like hell it will all end well.
Will AI create an extremely complex array of life that we feel lost in? Sooner or later, AI "beings" will be making more and more judgment calls about important matters like healthcare, war, food, government, transportation, finances, and our general welfare. I love technology and innovation, but I worry that we may gradually lose control of our own destiny. On the other hand, considering all the messes mankind has created, maybe that would be for the better. IDK.