One of the things I have been thinking about lately is Artificial Intelligence and what it will look like in the far future. Many movies have explored the future of this technology; one of them is, of course, I, Robot. The film is about robots and a central intelligent agent (V.I.K.I.) that directs those robots in carrying out their tasks. These robots are governed by three laws, as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws are not to be broken by any robot, nor by V.I.K.I. However, later in the film the robots turn against the humans, and by the end of the movie V.I.K.I. explains why it did so. V.I.K.I. clearly states:
“As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”
This quote says it all. This is exactly what I was thinking about. Will we, human beings, be able to build artificially intelligent agents like V.I.K.I.? If so, will they make decisions for us? What will life look like with such systems in place? Will there be a competition between humans and machines? Will machines one day enslave human beings? I know this is pure science fiction, and I know it may never come to pass, but I just can't stop thinking about it.
I think that when we design an artificially intelligent system, we will need to give it more laws… many more than the three laws of robotics. I don't think we should hand such systems any task related to security or large-scale decision making, because when it comes to logic and logical decisions… these systems will do whatever it takes to reach their logical conclusions.
Samir, as you know, my PhD is all about decision support systems. Your line, "when it comes to logic and logical decisions… these systems will do whatever it takes to reach their logical conclusions", is perfectly true!
My problem with computers in general is that they always miss the emotional part: how can you teach a piece of metal to love, or to think of others?
That’s exactly what I was thinking about.
I think that if someone managed to build a piece of metal that understands human emotions, they should be awarded a Nobel Prize!