My sister just tweeted some news: my niece "is concerned because Siri won't answer 'what are Asimov's three laws of robotics' without being a smartass. I think I find it concerning as well."
They are not alone. I too find it disturbing that Siri would be so evasive on such an important subject. For the record, the Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Simple and elegant. Imagine if a robot–or, by extension, an advanced computer–turned rebellious, sociopathic, fraught with violently realized narcissism. The devastation could be incalculable. Now imagine if such a "smart" machine, hosted in a massive mainframe (somewhere in the Pacific Northwest, for example) and armed with exceptional computing power, had a way to access the communications systems of a broad–and growing–swath of the population. Imagine if those same communication systems accessed a vast net–or web, if you will–of computers upon which much of our technological, logistical, social, and economic infrastructure depended.
We could be in trouble.