Tuesday, June 2, 2009

"Robots as Human Beings," by Kristen Humphrey

Throughout the years, robots have been a very popular theme in science fiction. Many movies and books explore all that a robot might be capable of doing. In this day and age, as scientists make more and more technological advances, robots are becoming more advanced in reality as well. Though they have not yet become as developed as the robots in Terminator or I, Robot, the ideas these movies present seem to be at least a possibility for the future. Even now, robots are capable of a great number of outstanding feats and are becoming more and more human in their appearances and mannerisms. However, no matter how developed robots become, most people would never consider them to actually be human beings. Robots do not breathe or eat like humans do. They also do not seem able to think or feel emotion the way humans can. Therefore, it would seem idiotic to consider them to be on the same level as a living human being. There has always been a distinct line between what it means to be a robot and what it means to be an actual human being. However, the line becomes much fuzzier in the world of science fiction. In some cases, robots even seem to appear more human than actual human beings. Could it be possible that one day in the future, a robot could be viewed as a human being? This is a very complicated question, and many factors go into deciding one’s answer.

When trying to decide whether robots could be considered human beings, one must ultimately define what being human actually means. It is quite obvious that robots could never be considered human in the biological sense, but there seem to be more important aspects to being human than just having a human body. For instance, the ability to reason would seem to be one factor. In Isaac Asimov’s science fiction short story “Reason,” robots clearly have the ability to reason. The story describes how the robot Cutie uses logic and reason to come to the conclusion that he was not made by humans and that he should actually be in charge of the humans who built him. It would seem, then, that robots are just as capable of reasoning as a human being is.

Another aspect that is important to being human is the ability to love and to feel emotion. Humans can form emotional attachments with other humans at a deeper level than any other creature can. Robots, it seems, just would not be capable of forming the type of emotional attachments with others that are essential to being human. However, certain science fiction stories have attempted to show that this is, in fact, possible. In Isaac Asimov’s short story “Robbie,” a little girl named Gloria forms an emotional attachment with a robot called Robbie. She is so attached to him that she becomes distraught when her parents take him away. In the story, Gloria has the following conversation with her mother:
‘Why do you cry, Gloria? Robbie was only a machine, just a nasty old machine. He wasn’t alive at all.’

‘He was not no machine!’ screamed Gloria, fiercely and ungrammatically. ‘He was a person just like you and me and he was my friend’ (Asimov, “Robbie,” p. 59).
It is obvious that in Gloria’s eyes, Robbie is capable of forming a tight bond with a human. However, one cannot tell how Robbie feels about the supposed friendship, or whether he even feels anything at all. Still, his arms “wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red” when he was around her (Asimov, “Robbie,” p. 70). From Robbie’s physical actions one could conclude that he did feel some type of favorable emotion towards Gloria. However, this was the result of how he was programmed to behave towards humans. One could argue that Robbie was not actually conscious of his actions and that he had no choice in how he behaved; he was only behaving as he was programmed to behave. One could also argue, though, that it does not matter where Robbie’s emotions come from as long as he has them.

“The Bicentennial Man” by Isaac Asimov challenges the reader to think more about the question, could robots be human? In the story, the robot Andrew Martin has a strong desire to become a human being. He longs for this so much that he gives up his resilient metal body in exchange for a weaker, organic one that closely resembles a human’s body. He even receives humanlike organs and undergoes procedures that give him the ability to breathe and eat. In every outward aspect, he appears to be human. However, many people treat Andrew as if he were just an ordinary robot. Andrew longs to become the first robot to be given his freedom, but people are unwilling to grant him his wish. They feel that only a human being has the right to be considered free. In response, Andrew says, “It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom” (Asimov, “The Bicentennial Man,” p. 256). Like Robbie, Andrew is another example that robots can feel some type of emotion. Eventually, Andrew is given his freedom, but he is still not satisfied and begins to wish to be called an actual human being. He has the following conversation with a human:
‘There we would run into human prejudice and into the undoubted fact that however much you may be like a human being you are not a human being.’

‘In what way not?’ asked Andrew. ‘I have the shape of a human being and organs equivalent to those of a human being. My organs, in fact, are identical to some of those in a prosthetized human being. I have contributed artistically, literarily, and scientifically to human culture as much as any human being now alive. What more can one ask?’ (Asimov, “The Bicentennial Man,” p. 281-282).
If robots can reason as well or even better than humans can, and if they are capable of feeling some type of emotion, is this enough for them to be called a human being, or is there something else that humans have and robots lack?

An important aspect of humans is that they are moral creatures. Robots seem to have no sense of right and wrong. Their actions are determined by how they were programmed and not by what they believe is right or wrong. If robots were somehow programmed so that they knew the difference between right and wrong and were instructed to act accordingly, some people might say they have a set of morals. However, others would say that robots would still be forced to act in a certain way and would not be acting of their own free will. This raises the questions of what free will actually is and whether humans even have it.

One must decide whether humans have free will before deciding if robots have it, or whether it even matters if they have it or not. A popular view on this issue is determinism, which states that our actions are predetermined. There are several different positions one can take on determinism. Hard determinists are incompatibilists: they believe that free will and determinism cannot both be true, and since they accept determinism, they conclude that humans do not have free will. Libertarians are also incompatibilists, but they draw the opposite conclusion, holding that humans do have free will and that determinism is therefore false. Soft determinists, also called compatibilists, believe that determinism and free will can coexist. At the core of the issue, libertarians and compatibilists differ on their definition of freedom. Libertarians believe freedom to be the power to do otherwise, whereas compatibilists believe freedom is the ability to do what one wants to do. According to compatibilists, it does not matter that a person could not have decided or acted otherwise, because the action is still what he decided to do in the first place. This raises the question of whether there is also a “robot” form of compatibilism: could being free be compatible with being programmed? If one takes the view that the actions of a human being are destined to happen, and yet that human is still free, then a robot could be considered to have free will as well, even though he is doing what he is programmed to do, for human beings would only be doing what they were “programmed” to do as well. Likewise, if one believed that human beings did not have free will, then robots would be no different from humans in this respect, because neither would be considered to have free will.

It has often been said that human beings are the only creatures in possession of a soul. Some people would say that this is the deciding factor when considering whether robots could be human: robots do not have souls, and therefore they cannot be human. However, others, such as materialists, do not even think that humans have souls. And even if one does believe that humans have souls, he would have a very hard time proving it. In the story “Article of Faith” by Mike Resnick, the robot Jackson struggles with why he cannot be part of the congregation of the church he cleans. He has the following conversation with the pastor of the church:
“Is the Bible true?” asked Jackson.

“Yes, the Bible is true.”

“Then is it not true for robots as well as for men?”

“No,” I said. “I am sorry, but it is not.”

“Why?” he said.

“Because robots don't have souls,” I answered.

“Where is yours?” asked Jackson.

“Souls are intangible,” I explained. “I cannot show mine to you, but I know that I possess it, that it is an integral part of me.”

“Why am I prohibited from offering the same answer?”

“You are making this very difficult, Jackson,” I said.

“I do not wish to cause you discomfort or embarrassment,” replied Jackson. He paused, then continued: “Is that not the manifestation of a soul?” (“Article of Faith”).
The pastor has no way of answering Jackson’s questions about how he knows that he has a soul while Jackson does not. Jackson makes a very strong argument that we cannot really be sure humans have souls any more than we can be sure robots do not.

An obvious difference between humans and robots is that humans are “naturally” made, while robots are artificially made. If scientists could create robots who were considered to be human beings, this would seem to diminish the worth of a human life, because it would imply that humans are nothing more than factory products. Of course, there are plenty of people who already believe that humans are merely manufactured products, but when looking at the issue through a religious lens, people amount to more than this. One must also remember that humans can now produce other living humans artificially through a variety of methods that help infertile couples conceive. At what point would artificial methods start to threaten the value of human life? In today’s world, the human sperm and egg are still essential to producing a human being, even if the way they come together is unnatural. If scientists were able to create another human being with material that was completely nonhuman, I feel that this is where the worth of human life would begin to be threatened.

The human brain is an incredibly complex aspect of human beings, and it is what separates humans from other animals and machines. “Surely, if there is one thing that makes us human it is the brain. If there is one thing that makes us a human individual, it is the intensely complex makeup, the emotions, the learning, the memory content of our particular brain” (Asimov, “Cybernetic Organism,” p. 465-466). If a robot were to be viewed as a human being, he would have to have some type of brain considered equal to a human’s brain in its level of functioning.

When trying to decide if a robot could ever be considered a human being, one must look at the human mind from the inside and from the outside. When one looks at it from the inside, one will “probably find things like thoughts, experiences, feelings, emotions, hopes, fears, expectations, beliefs, desires, intentions” (Rowland, p. 57). The things listed above make people feel a certain way, and when people think about how it feels to have a certain emotion or desire, this is called consciousness; consciousness is being aware of how one feels. The things listed above can also be about someone or something, and when people consider what their thoughts and desires are about, this is called intentionality. Looking at the mind from the inside reveals that humans have both consciousness and intentionality, and this is what robots seem to lack. Looking at the mind from the outside presents a startling contrast to looking at it from the inside. From the outside, the brain just looks like some type of matter formed into a particular shape. Technology has allowed scientists to replace this view “with a more complex picture of the brain as a sophisticated information-processing system, where electro-chemical messages are sent between neurons – the units of the brain – with inconceivable rapidity” (Rowland, p. 58). Although this view is much more advanced than simply looking at the brain as matter, it still amounts to saying that the brain is made up of nothing more than neurons and chemical activity. When one looks at the mind from the outside, he sees nothing nonphysical about it; all he can observe are its physical aspects. Consciousness and intentionality would both be considered nonphysical aspects of the mind, because they do not have physical properties, or at least, one cannot observe them in a physical sense. Hence, the views from the inside and the outside do not match, and this is where the mind-body problem comes in.

There are many different views on the mind-body problem, and one of the main ones is dualism. Dualism states that human beings are made up of both physical and nonphysical properties. Properties such as the mind or soul would be considered nonphysical, while the body and brain would be viewed as physical parts. According to this view, it would be impossible for robots to be considered human beings, because they are purely physical beings and therefore cannot possess minds or souls. However, some dualists think that the mind “emerges” from the physical organization of the brain, and if this were true, robots could have a mind if their brains were built correctly. Philosophers have also found many problems with dualism. One complaint is that it does not define what a mind is, but merely says what it is not. For example, dualists say that the mind has no shape or color and is not defined by physical properties, but they do not say what the mind actually is. Another problem is explaining how the mind, which is nonphysical according to dualists, could interact with and affect the physical world. The nonphysical mind and the physical world do not seem to be similar enough to have any effect on each other.

Another view on the mind-body relationship is materialism. Materialists view humans as entirely physical beings. According to materialists, the brain and the mind are essentially the same thing. If one is to believe a robot could be considered human, a materialist view of the mind would offer the greatest possibility for this to happen.

Functionalism is another view on the mind-body relationship. Functionalists believe that what matters is not the material mental states are made of, but rather the role these mental states play. For example, pain should be thought of as a damage detector. If having a biological human brain were required for being human, then it would be impossible for a robot to be a human being. However, one must also listen to the words of the robot Andrew Martin from Isaac Asimov’s short story “The Bicentennial Man”: “It all comes down to the brain, then, but must we leave it at the level of cells versus positrons? Is there no way of forcing a functional definition? Must we say that a brain is made of this or that? May we not say that a brain is something – anything – capable of a certain level of thought?” (p. 286-287).

Andrew draws the argument away from what the brain is made of to how the brain actually functions. To him it does not matter what a mind or brain is made of as long as it is able to do everything it is supposed to do.

Nowadays, robots are capable of accomplishing a great number of intellectual feats that even the most intelligent humans would struggle with. However, most people believe that humans are still more intelligent than robots, even though some robots would be able to beat just about any human in an intellectual contest. This is because there are multiple kinds of intelligence. One definition of intelligence would be the ability to learn and retain a large amount of knowledge. Robots have a very high level of this type of intelligence. They have the ability to store a large amount of data and facts and can apply this knowledge to the situations they are put in. However, as science fiction writer Isaac Asimov put it,
But such computer ability is not the only measure of intelligence. In fact, we consider that ability of so little value that no matter how quick a computer is and how impressive its solutions, we see it only as an overgrown slide rule with no true intelligence at all. What the human specialty seems to be, as far as intelligence is concerned, is the ability to see problems as a whole, to grasp solutions through intuition or insight; to see new combinations; to be able to make extraordinarily perceptive and creative guesses. Can’t we program a computer to do the same thing? Not likely, for we don’t know how we do it. (“Intelligences Together,” p. 452)
What truly seems to be important to being a human being is not the ability to retain a large amount of data, but the ability to be perceptive and to use intuition. It is not just having knowledge that matters, but being able to use that knowledge in creative and useful ways. No matter how advanced robots become, there is a possibility that they will always be lacking in this area of intelligence.

When deciding if robots could ever be considered human, one must ultimately define what makes a human being human. Most people would agree that it is not the human body, but the more complex and intangible aspects, such as the ability to love and to have hopes and dreams, that really define what a human is. Humans are also conscious and capable of being aware of how and why they do what they do. It is apparent from science fiction stories that robots can have wishes and desires and are very adept at using logic and reason. In this sense, they are just like humans. It has also been shown that robots can have an artificial brain that is just as advanced as a human’s brain in terms of its physical aspects. It comes down, then, to the question of whether there is something nonphysical about human beings that cannot be replicated. If humans do have a mind or soul that is nonphysical, this would make human life very hard to replicate. Do emotions such as love, fear, and hope amount to nothing more than the results of chemical reactions? To me, the evidence points to there being something more to humans, something far more complex than can simply be artificially created. However, even though this appears to be true, it is not necessarily the correct conclusion. Due to the lack of tangible evidence, the debate over whether robots could be human will forever remain unresolved.
