Are Asimov's Laws flawed by design, or are they feasible in practice?

asked 2 months ago

Isaac Asimov's famous Three Laws of Robotics originated in his science fiction stories, where they serve as a safety measure meant to keep untimely or manipulated situations from descending into havoc.

More often than not, Asimov's narratives found a way to break the laws, leading him to modify them repeatedly: in some stories he altered the First Law, added a Fourth (or Zeroth) Law, or even removed all the laws altogether.

However, it is easy to argue that, in popular culture and even in the field of AI research itself, the Laws of Robotics are taken quite seriously. Setting aside the side problem of differing, subjective, and mutually exclusive interpretations of the laws: are there any arguments showing that the laws are intrinsically flawed by design, or, alternatively, strong enough for use in reality? Likewise, has a better, stricter set of safety heuristics been designed for the purpose?

3 Answers

Asimov's laws are not strong enough to be used in practice. Strength isn't even the right criterion: since the laws are written in English, their words would first have to be interpreted subjectively to have any meaning at all. You can find a good discussion of this here.

To transcribe an excerpt:

How do you define these things? How do you define "human" without first having to take a stand on almost every issue? And if "human" wasn't hard enough, you then have to define "harm", and you've got the same problem again. Almost any really solid, unambiguous definitions you give for those words—that don't rely on human intuition—result in weird quirks of philosophy, leading to your AI doing something you really don't want it to do.

One can easily imagine that Asimov was smart enough to know this and was more interested in story-writing than designing real-world AI control protocols.
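
To make the excerpt's point concrete, here is a minimal, hedged sketch of what a literally executable First Law would require. Everything in it (the `Entity` fields, `is_human`, `expected_harm`, `first_law_permits`) is a hypothetical illustration, not a real safety system; each stub forces exactly the contested definitions the excerpt describes:

```python
# A toy sketch, not a real safety system. It shows why an executable
# First Law immediately demands definitions that nobody has.
from dataclasses import dataclass

@dataclass
class Entity:
    heartbeat: bool       # is a patient on full life support "human"?
    brain_activity: bool  # is a brain-dead body "human"?

def is_human(e: Entity) -> bool:
    # Any concrete rule we pick here encodes a contested philosophical
    # position; swap the rule and the robot's obligations change.
    return e.brain_activity

def expected_harm(action: str, e: Entity) -> float:
    # "Harm" is just as bad: physical injury only? Psychological harm?
    # Economic harm? Harm now versus greater harm prevented later?
    raise NotImplementedError("no uncontroversial definition exists")

def first_law_permits(action: str, bystanders: list[Entity]) -> bool:
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    # Calling this raises by design: the law can't run until the
    # two stubs above commit to a moral theory.
    return all(
        expected_harm(action, e) == 0.0
        for e in bystanders
        if is_human(e)
    )
```

Whatever body eventually replaces `expected_harm`, it encodes a moral theory; the law itself cannot supply one.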

In the novel Neuromancer, it was suggested that AIs could serve as checks on each other. Ray Kurzweil's predicted Singularity, or the prospect of hyperintelligent AGI more generally, might not leave humans any real possibility of controlling AIs at all, leaving peer regulation as the only feasible option.

It's worth noting that Eliezer Yudkowsky and others ran an experiment wherein Yudkowsky played the role of a superintelligent AI with the ability to speak, but no other connection outside of a locked box. The challengers were tasked simply with keeping the AI in the box at all costs. Yudkowsky escaped both times.

Consider Asimov's first law of robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

That law is already problematic when we consider self-driving cars.

What's the issue here, you ask? Well, you're probably familiar with the classic thought experiment in ethics known as the trolley problem. The general form of the problem is this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

Source: Wikipedia

Self-driving cars will actually need to implement real-life variations of the trolley problem, which basically means that self-driving cars need to be programmed to kill human beings.

Of course that doesn't mean that ALL robots will need to be programmed to kill, but self-driving cars are a good example of a type of robot that will.
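
To illustrate, here is a hedged toy sketch of the decision a self-driving car's planner is forced to make when no harm-free option exists. The function name, trajectory labels, and casualty numbers are all made-up assumptions, not a real driving stack; the sketch only shows that the programmer must commit to a ranking of fatal outcomes:

```python
# Toy sketch: any control policy for an autonomous car must rank
# outcomes, including outcomes in which humans die.

def choose_trajectory(options: dict[str, int]) -> str:
    """Pick the trajectory with the fewest expected fatalities.

    `options` maps a trajectory name to the number of people expected
    to be killed if the car takes it. Note what this function cannot
    do: return an option that harms nobody, because none exists.
    """
    return min(options, key=options.get)

# Brakes have failed; every available action kills someone.
outcomes = {
    "stay_in_lane": 5,  # hit the five pedestrians ahead
    "swerve_right": 1,  # hit the one pedestrian on the side
}

print(choose_trajectory(outcomes))  # -> "swerve_right"
# Under Asimov's First Law, both actions are forbidden: the robot
# either injures a human (swerving) or, through inaction, allows
# humans to come to harm (staying in lane). The law has no tie-breaker.
```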

Asimov devised the three laws specifically to show that no three laws are sufficient, no matter how reasonable they seem at first. I know a guy who knew the guy, and he confirmed this.

