Sunday, February 1, 2015

Does This Unit Have A Soul?



Black Mirror's "Be Right Back" raises interesting questions about the nature of personhood and what exactly makes a person who they are. Despite absorbing all of Ash's public (and some of his private) memories, the replacement is only ever able to mimic him. He can never fully become the man Ash was; he is instead more like a photograph of Ash at the moment he died: never aging, caught in time, and unable to offer the full spectrum of human emotion and interaction to his bereaved partner.

The most interesting moment is when Ash's replacement begs for his life. When first asked to jump, he refuses, noting that Ash had never shown suicidal tendencies. This raises the question of whether the unit has any coding for self-preservation of its own. He does beg for his life when instructed to, mimicking the emotions Ash might plausibly have shown, but one does wonder if some truth lies underneath.


Isaac Asimov set down the Three Laws of Robotics in I, Robot, and they have been exceptionally influential on later explorations of robots in fiction. They are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In this episode, the replacement broadly follows these rules, even though they are never quoted. He will not injure another human, even when prompted (perhaps not even if Martha insisted that Ash had hit her in their private life), and he will not destroy himself by jumping off the cliff (perhaps not even if she claimed that Ash had been suicidal in his private life).
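
To make the priority ordering of the laws concrete, here is a minimal, purely hypothetical sketch in Python of how such a unit might check a requested action against the three laws in strict order. The function and its parameters are my own illustration; nothing in the episode specifies how the unit is actually coded.

    # Hypothetical sketch: check a requested action against Asimov's Three
    # Laws in strict priority order. Nothing here is from the episode.
    def permitted(harms_human: bool, ordered_by_human: bool, harms_self: bool) -> bool:
        # First Law: a robot may not injure a human being.
        if harms_human:
            return False
        # Second Law: obey orders from humans unless they conflict with
        # the First Law (already ruled out above).
        if ordered_by_human:
            return True
        # Third Law: protect its own existence unless that conflicts with
        # the First or Second Law.
        if harms_self:
            return False
        return True

    # Martha's order to jump harms no human, so a strict reading of the
    # Second Law would compel obedience despite the cost to the unit.
    print(permitted(harms_human=False, ordered_by_human=True, harms_self=True))  # True

Read this way, the Second Law should actually compel the unit to jump the moment Martha orders it; that he refuses at first suggests his self-preservation, or his mimicry of Ash's, sits slightly outside Asimov's strict hierarchy.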

So where does the coding of the unit end and the memories of Ash begin? Is the desire for self-preservation genuine or simulated based upon the memories of the man he's trying to be? 

I believe there is some validity to his claim that he is acting on Ash's past behavior. His entire purpose is to mimic the man Ash was in life. It is an act, but not one performed with any malicious intent. The path laid out for him is lined with hurdles and expectations he could never possibly meet, which sets him up for failure.

Ash's replacement will never question his existence, because it is hard-coded into him. He will not doubt; he will only act within the parameters (in this case, Ash's life) set before him. His personhood is only a snapshot of Ash, and while he can be convincing, it is ultimately artificial.


3 comments:

  1. I do not believe the unit has a soul. The unit cannot act off of emotions. By this I mean the unit cannot get angry, frustrated, sad, happy, or fear anything. The unit can only mimic what it is programmed to do.

  2. It is an interesting idea to think about whether the unit has a self-preservation safety. In a way, from a large company's point of view, it would make sense. This company is spending ridiculous amounts of money (I would assume) to create these units, since there is no mention of whether this is a common practice for grieving people. But also, from a large-industry standpoint, if the unit and its human 'family' cannot live together symbiotically and the human cannot get rid of the robot, there would be returns and maybe even lawsuits. Honestly, if I did this (which I wouldn't!) and I wanted it to leave and it started begging me and dipping deeper into my already grieving and sensitive feelings, I would ship that thing back immediately. All she can do is put him in the attic, which makes me wonder if the company even has a return policy.

  3. The question isn't exactly how the android thinks and feels but whether it can think and feel. The entire point of the man/person distinction is to say that things that aren't human can still be persons. The human brain is in many ways a complex computer, so it is easy to compare living to a computer having inputs and outputs according to how we are programmed to see and feel the world. I think many of the traits of the android (no natural sense of fear or emotional response, along with complete subservience) that people mark as necessary for personhood aren't actually required. I think the robot is programmed to act in a way that pleases its user, but if you could move some code around and make him less limited, then he would be more of a restricted person than a machine.

