Matt Williams
1 min read · Jan 24, 2024


Ah, you mean where LLM algorithms see patterns where none exist? You're talking about an algorithm seeing nails because it's programmed to be a hammer. That hardly compares to a sentient program somehow ignoring its programming and killing someone.

And what you're saying about programming is a vast oversimplification. You'd not only need the same capabilities as the manufacturer, you'd also have to overcome all the safeguards and encryption. And, in keeping with the basic principle of safeguards, the programmers would include a kill-switch. That's assuming advanced AIs in the future aren't protected by quantum encryption, which would make the whole "they can be reprogrammed with the same techniques" scenario impossible.
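
To illustrate the kill-switch point, here is a minimal sketch in Python. The KillSwitch and SafeguardedAgent names are hypothetical; the point is only the pattern, where the operator holds the switch and the agent can read it but never reset it.

```python
import threading

# Hypothetical kill-switch safeguard: an external interrupt the
# operator controls and the agent can only observe.
class KillSwitch:
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        # Only operator-side code calls this; once set, it stays set.
        self._tripped.set()

    @property
    def engaged(self):
        return self._tripped.is_set()

class SafeguardedAgent:
    def __init__(self, kill_switch):
        self._kill_switch = kill_switch

    def act(self, action):
        # Every action is gated on the switch; the agent has no code
        # path that can reset or bypass it.
        if self._kill_switch.engaged:
            raise RuntimeError("Kill switch engaged: all actions halted.")
        return f"executing: {action}"

switch = KillSwitch()
agent = SafeguardedAgent(switch)
print(agent.act("compute trajectory"))  # runs normally

switch.trip()  # the operator shuts the agent down
try:
    agent.act("compute trajectory")
except RuntimeError as err:
    print(err)  # Kill switch engaged: all actions halted.
```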

Most importantly, the Three Laws being fictional at the time of writing is irrelevant. They represent Asimov's attempt to show how stupid the idea of "berserker robots" was. Anyone developing a thinking machine would have the foresight to include programming that forbids it from inflicting harm, which is precisely what is being done today.

Case in point: the EU AI Act, the Defense Innovation Board’s AI Principles Project, the UN’s Principles for the Ethical Use of Artificial Intelligence, and this:

https://obamawhitehouse.archives.gov/the-press-office/2016/08/02/fact-sheet-new-commitments-accelerate-safe-integration-unmanned-aircraft
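
As a toy illustration of what "programming that forbids harm" looks like in practice, consider a hard-coded policy check that runs before any action. The FORBIDDEN_INTENTS set and execute function below are hypothetical stand-ins; real systems rely on trained classifiers and layered policy filters rather than a simple blocklist.

```python
# Hypothetical harm-prohibition gate, loosely in the spirit of
# Asimov's First Law: the check runs before every action and the
# agent cannot choose to skip it.
FORBIDDEN_INTENTS = {"harm_human", "damage_property", "disable_safeguards"}

def execute(action, intent):
    if intent in FORBIDDEN_INTENTS:
        raise PermissionError(f"Refused: intent '{intent}' violates safety policy.")
    return f"executing: {action}"

print(execute("open door", intent="assist_human"))  # allowed

try:
    execute("open airlock", intent="harm_human")    # refused
except PermissionError as err:
    print(err)
```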
