That makes no sense, sir. A machine bound by the Three Laws would be unable to decide between the life of an unborn child and that of the woman; it would be driven mad by the paradox. And that example hardly demonstrates that machines would allow human extinction.
As for the opinions of experts, the consensus is that autonomous weapons systems need to be regulated, which is why it is agreed that human operators must remain responsible for making life-and-death decisions.
And you’re supporting my overall argument here. Any AI put in a position where it could cause harm will have safeguards put in place, thereby preventing it from turning on its handlers.
Flatly declaring AI a grave danger is simplistic. Recognizing the potential for abuse or harm and taking steps to prevent it, that's just common sense.