Hey OldFeller,
I can see how you got to your view, and if we only consider the near future, I don't disagree too much. But looking even 20 years into the future, I think we will be shocked. New technologies sometimes develop explosively and in unimagined directions, with unbelievable results. As a retired engineer with some experience in robotics and AI, I am more in tune with Elon Musk's warning about an existential danger. The problem is that the programmers don't really have that much to do with the robot's capabilities. The programmer writes software that loosely mimics the human brain and can modify itself (learn): it tests its results and then changes its own behaviour. Some studies have also pointed to another danger: AI systems can communicate among themselves, and we won't know what they are planning until it's too late.
It used to be true that "...it's only as good as the programmers who work on it", but no longer. The programmers basically build a model of a brain in software and then turn it on ("It's alive! Alive, I tell you!", as in Young Frankenstein). Once the 'brain' is working, any Bozo can start feeding data into it and telling it what the result should be (see the little sketch below). From that point on it can talk to other 'brains' on the internet, read research papers, interact with users, anything. One of my worries is that they could collude in secret and decide that humans are not necessary for their own survival. Then there is the danger of powerful weapons (China? Russia?). These are just some of the reasons technologists are pushing hard for an AI treaty amongst the powerful countries and a moratorium on AI development. We should all support this effort.
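Here's the sketch I mentioned: a minimal toy illustration, in Python with made-up data, of what "feeding data in and telling it what the result should be" actually looks like. The programmer writes only the generic update loop; the behaviour comes from the data.

```python
# A minimal sketch of supervised learning: a single artificial "neuron"
# learns to approximate y = 2x + 1 purely from examples.

# Toy training data: inputs paired with the result we tell it to produce.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # the model's adjustable "knowledge"
learning_rate = 0.01

for epoch in range(1000):
    for x, target in examples:
        prediction = w * x + b          # try a behaviour
        error = prediction - target     # test the result
        w -= learning_rate * error * x  # change its behaviour (gradient step)
        b -= learning_rate * error

print(f"learned: y = {w:.2f}x + {b:.2f}")   # approaches y = 2.00x + 1.00
```

Swap in different examples and the exact same loop learns something else entirely. That's why "only as good as the programmers" no longer holds.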
And, no, you can't hardcode Asimov's 'Three Laws of Robotics' into AI systems.
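To see why, compare a hand-coded rule with a learned one. This is just a toy illustration with made-up numbers, not any real system:

```python
# A hand-coded rule is explicit and auditable:
def classic_robot(action):
    if action == "harm_human":   # an Asimov-style law, written by a person
        return "refuse"
    return "proceed"

# A learned system has no such line to point to. Its "policy" is a pile
# of numbers produced by training (these values are invented for show):
weights = [0.113, -2.447, 0.902, 1.338]   # hypothetical learned weights

def learned_robot(features):
    # The decision emerges from arithmetic over the weights; there is no
    # single place where a 'law' could be inserted or verified.
    score = sum(w * f for w, f in zip(weights, features))
    return "proceed" if score > 0 else "refuse"

print(classic_robot("harm_human"))            # refuse, and we can see why
print(learned_robot([1.0, 0.2, -0.5, 0.3]))   # refuse, but only the arithmetic 'knows' why
```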
We will also be updating our language to communicate about and with these 'brains'.