Brains Beat Binary
Since Samuel Butler first voiced his fears about the rapid development of machinery, humans have fixated on an imagined future in which we are enslaved, or worse, by our own creations. Considering that he wrote, "There is no security against the ultimate development of mechanical consciousness," before the computer existed, his words have proved surprisingly prescient (Butler, 1906). Yet although computers are indisputably growing in computational power at an incredible rate (Moore, 1998), there is no realistic reason to fear a computer that can work autonomously in any meaningful way, let alone outsmart a human. Consider Microsoft's Project Adam. The software can sort and organize millions of photos quickly and accurately by analyzing the images themselves; it can distinguish between extremely similar-looking breeds of dogs in a photo, for example (Microsoft Research, 2014). Practically, this means you might one day be able to run an image search for a sweater you want, oh, say, "a mauve cashmere sweater with 3/4-length sleeves," and without any cumbersome text-based tagging or sorting, the search engine will analyze every image on the internet and filter out all the cashmere sweaters that aren't mauve or that have full-length sleeves. It is an impressive accomplishment, one of the most incredible achievements of practical computing today. However, even this breakthrough does nothing to narrow the gap between computer and human intelligence. The technology cannot operate independently of human involvement: it responds entirely to human input, follows human-written instructions, and depends on human programmers and human infrastructure, like electricity or data from the internet, to feed it.
If we examine the thinkers who predict a world of computers thinking at a human level, we encounter a mostly deluded camp of sci-fi lovers who base their theories more on Star Wars-inspired fantasy than on any facts. Even serious and respected thinkers like Ray Kurzweil, Google's "futurologist" (even the title invites mockery, doesn't it?), have questionable motives when they predict computers that think like people within 15 years. The existence of his job relies on the hope that one day computers can reach that level. Similarly, Kurzweil's reputation would suffer if the idea that computers will match our thinking power became commonplace; certainly The Guardian would be less interested in him (Cadwalladr, 2014).
Artificial intelligence, and the abiding fear of computer-powered dominion over humans, is commonplace and popular fodder for idle discussion. Considered realistically, however, these fears are misguided, and the hope of a computer as smart as a human is absurd.
Perhaps one of the challenges to adequately discussing this topic is the difficulty of defining the human brain in a way that allows its power to be compared to a computer's. Let's first look at the human brain through a terrible lens, and one that sci-fi concepts seem to constantly attribute to computers: the power to destroy. Humans war and fight at a level unique to our species; dolphins, as predatory and scary as they may be (Goldstein, 2009), will never launch a mortar barrage against an enemy pod or engage in genocide. So will robots ever reach this uniquely human metric? Noel Sharkey, a computer science professor at the University of Sheffield, England, says no: "They are just computer systems... the only way they can become dangerous is when used in military applications" (Tan, n.d.). To Sharkey, robots and artificial intelligence have the greatest growth potential in toy markets, which says a great deal about how much potential for nefariousness he sees in future computing technology. He goes on to point out that the largest developments in robotics come not from software but from hardware: robots that can walk or navigate difficult terrain seem to be the new trend in machines mimicking human behavior.
An article from Vox.com makes an interesting case for why computers will never be able to match human intelligence: "A computer program has never grown up in a human family, fallen in love, been cold, hungry or tired, and so forth. In short, they lack a huge amount of the context that allows human beings to relate naturally to one another" (Lee, 2014). Basically, the argument is that even if a computer can match our brain's computational power (a very far-off and unlikely possibility (Myers, 2010)), it will never be able to pass as a human because it lacks the experiences that really create our humanity. In other words, humans are so much more than our brain power: we are the products of our upbringings. Our tenacity, will, passions and dreams all come from the sum of our experiences, not from how fast we think. Because of that, computers will not be able to function at a human level of creativity or character, even if they can eventually perform more calculations per second than we can.
Actually, this supposition stems from a famous scenario posed by the philosopher John Searle in the 1980s (The Chinese Room Argument, 2004). He proposed that an Englishman with no knowledge of Chinese, if locked in a room with an instruction manual for manipulating Chinese characters, could successfully respond to messages passed under the door by a native Chinese speaker outside the room. Theoretically, given enough time, the Englishman could respond so accurately that the native Chinese speaker would be sure she was in fact corresponding with another native Chinese speaker. Essentially, the Englishman would have passed himself off as a fluent Chinese speaker without understanding a word of the exchange. The extension of the argument to artificial intelligence is that even if we create a computer that can mimic and interact with humans so convincingly that we believe we are conversing with a real human, that machine will not be human, because it lacks the contextual understanding of humanity.
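Searle's rulebook can be caricatured in a few lines of code: a lookup table that maps incoming symbols to outgoing symbols. The table entries below are invented placeholders (this is a sketch of the thought experiment, not a real chatbot); the point is only that the program produces plausible replies while understanding nothing on either side of the exchange.

```python
# A toy "Chinese Room": replies come from a rulebook (a lookup table),
# applied mechanically, with no comprehension of the symbols involved.
# The entries are illustrative placeholders, not a real conversation model.
RULEBOOK = {
    "你好":        "你好！",   # rule: when you see these marks, write those marks
    "你会说中文吗？": "会。",
}

def room_reply(message: str) -> str:
    """Follow the rulebook mechanically; unknown input gets a stock reply."""
    return RULEBOOK.get(message, "请再说一遍。")
```

To a correspondent who only sees the slips of paper, `room_reply` looks like a Chinese speaker; inside the room, it is pure symbol shuffling, which is exactly the distinction Searle's argument turns on.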
Whether we define the brain by what it produces (in this paper I discussed the example of war, but many others would suffice: art or romance, for instance), by its raw computational power, or by the experiences that mold each molecularly similar brain into such a unique masterpiece, the conclusion remains the same: no computer, no matter how powerful or well conceived, can ever approach a human level of thought or existence.
It is not hard to find sources that will warn you of the coming robot apocalypse or singularity that will render humans obsolete, whether in entertainment (The Matrix or Terminator series) or legitimate science (Ray Kurzweil and the whole school of futurologists). In part, I agree: computers and technology are capable of terrifying acts of destruction and cold inhumanity. What is important to remember, though, is that none of these acts are possible without human provocation, and the sometimes-scary lifelessness of computers is really only as scary as the lifelessness of a vacuum cleaner or screwdriver. In short, they're tools: incredibly powerful, important and relied-upon tools, but still just tools. If we ever limit the expansion of technology, we will cost ourselves advances in medicine, food, water and air purification, clean energy development and crisis management. It is not an exaggeration to say that technological advances, used responsibly, save lives. Instead of looking at technology suspiciously, we need to ask, "How can we use this technology? How can we develop it to better serve our needs?" Just as Prometheus surely scared his friends by wielding fire, we will no doubt earn criticism and condemnation for allowing and encouraging the pursuit of new technologies. But, like Prometheus, we will find it easy to ignore those criticisms with a full belly, or with a robot hygienist meticulously disinfecting our whole house, as the case may be.
References
Butler, Samuel. (1906). Retrieved from
http://www.gutenberg.org/files/1906/1906-h/1906-h.htm
Cadwalladr, Carole. (2014, February 22). Are the Robots About to Rise?
Google's New Director of Engineering Thinks So... .
The Guardian. Retrieved from
http://www.theguardian.com/technology/2014/feb/22/robots-google-
ray-kurzweil-terminator-singularity-artificial-intelligence
Goldstein, Miriam. (2009, May 13). The Dark Secrets
That Dolphins Don't Want You to Know. Slate. Retrieved from
http://www.slate.com/blogs/xx_factor/2009/05/13/
dolphins_are_violent_predators_that_kill_their_own_babies.html
Lee, Timothy B. (2014, August 22). Will artificial intelligence destroy
humanity? Here are 5 reasons not to worry. Vox. Retrieved from
http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-
worry-about-super-intelligent-computers-taking
Microsoft Research. (2014, July 14). On Welsh Corgis, Computer
Vision, and the Power of Deep Learning. Microsoft Research.
Retrieved from http://research.microsoft.com/en-us/news/features/
dnnvision-071414.aspx
Moore, Gordon E. (1998). Cramming more components onto integrated
circuits. Proceedings of the IEEE. 86. 1. Retrieved from
http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
Myers, PZ. (2010, August 17). Ray Kurzweil Does Not Understand the
Brain. Science Blogs. Retrieved from http://scienceblogs.com/
pharyngula/2010/08/17/ray-kurzweil-does-not-understa/
Tan, Lay Leng. (n.d.). Inteligent-less. Innovation: The Singapore Magazine
of Research, Technology and Education. 6. 1. Retrieved from
http://www.innovationmagazine.com/innovation/volumes/
v6n1/feature3.shtml
The Chinese Room Argument. (2004, March 19). In Stanford
Encyclopedia of Philosophy. Retrieved from
http://plato.stanford.edu/entries/chinese-room/