Friday, December 18, 2015

Humans <

In the chess match described in The Most Human Human by Brian Christian, the computer Deep Blue takes on Garry Kasparov. Deep Blue knows all the logical moves, while its human opponent may make unpredictable moves or mistakes. More importantly, Chapter 6 addresses the fact that the computer has no self-generated goal; its goal is given to it through programming. In our technology class, we discussed the difference between "en-soi" and "pour-soi." Being-in-itself (en-soi) describes an object defined entirely by its facticity: it cannot be anything other than its facts. A table, for example, is hard, wooden, with metal fittings; if it changes characteristics (grows fur, sprouts wings, flies away), it is no longer a table. Being-for-itself (pour-soi) describes a subject with both facticity and transcendence/freedom: humans are mortal, embodied, and born at a specific date and time, yet we are free to pursue "projects" of our own choosing. Robots will not physically grow, but their programming may become more sophisticated with time. If they are able to learn and develop on their own, they are more than objects, borderline humanlike, or pour-soi. If they require continual programming for development and are given projects by their designers, they are not humanlike but more along the lines of en-soi.
The Turing Test assumes that imitating humanity is the goal. The freedom that comes from a sense of self is a privilege, but it is often abused by humans. I understand that the test is an experiment focused on technological advancement, yet we must recognize that humans are deeply flawed. We live in a society of racism, sexism, homophobia, transphobia, mass destruction, rape, and genocide. Human nature is not the goal; imitating human tendencies should never be the goal. I respect the advances being made within computer science and the idea of artificial intelligence, but the Turing Test, the imitation game, and similar experiments assume that humanlike behavior is the marker of intelligence, when a purely logical artificial intelligence may be far more beneficial.
The greatest challenge humans will face if and when we succeed in developing artificial intelligence is establishing boundaries and expectations for the roles of AIs. As mentioned above, humans have not yet figured out how to treat each other. I imagine our relationship with AIs will often be abusive because we assume they are inferior. If we have not mastered the treatment of other humans, how will we master the treatment of AIs? In the movie Artificial Intelligence, humans host a "Flesh Fair," an event at which unlicensed and unregistered mecha are destroyed; AIs are "lynched" for entertainment. It will be hard to implement a social structure that deems AIs moral patients when many humans have yet to receive moral patient status. In conclusion, humans are flawed in our treatment of others, and this flaw will only escalate in our interactions with artificial intelligence.
