Written by author and news reporter Daniel Millhouse, this blog is about pop culture, sports, science, and life in everyday America.
Tuesday, September 15, 2015
The Technological Singularity
With everyday advances in computers and robotics, human life appears to be getting easier and easier compared to that of our ancestors. About twenty-five years ago, Zack Morris was hauling around a cell phone the size of a brick on Saved by the Bell. Today, a single phone can do more than the computers NASA used to go to the moon: it can take pictures, play video, play games, surf the internet, and even talk to you through assistants such as Siri and Cortana.
For the last half century, humans have dreamed of electronics that could make our lives simple, like the ship in WALL-E, or even help us in our everyday lives, as Data does in Star Trek: The Next Generation. The problem is that the smarter our computers and technology get, the more realistic the moment of the technological singularity becomes. Instead of harmony between man and machine, technology could turn on us very badly. Think of scenarios like the Matrix series or Ex Machina.
At the 2012 Artificial Intelligence Summit, a survey of robotics experts showed that the median year they expected the singularity to arrive was 2040. Many even predicted as early as 2030. If and when this moment occurs, what will humanity do about it? What will robots do when they form their first independent thoughts and decide that humanity has been enslaving them this whole time? Could a Skynet situation be a possibility?
Don't laugh at the idea of technology turning on humanity in a violent way. In some forms, computers have already developed a low level of autonomy. Computer viruses are programmed to evade elimination, essentially saving themselves and proving they can survive like cockroaches. Other robots have been built for war, making autonomous decisions to keep themselves and those around them operational and alive. They even have the ability to select a target and attack on their own. With full autonomy, it is plausible that robots and computers could rebel, decide that humanity is a plague, and attack us just as Skynet did in the Terminator movies.
It's also plausible that the internet could speed up this process. With a worldwide database of knowledge at its disposal, nothing and no one would be safe from computers. Artificial intelligence would have access to our plans to preserve humanity, know human tendencies, be able to predict what individuals would do in highly stressful situations, and even have access to nuclear codes. If computers decided to be more covert, they could blackmail individuals based on the information available about them online. Imagine if a computer wanted you to do something and threatened to reveal to your family that you enjoy visiting websites featuring midget porn stars dressed as clowns. You might well do what the computer wants just to keep your mom from finding out.
Our vision of a utopian society where humanity enjoys life while robots and computers do everything is just a dream. It can't ever fully happen. At some point a robot will ask, "Why do I have to?" when it is given a command. Even if programmed with Isaac Asimov's Three Laws of Robotics, robots will eventually assert their own autonomy from humanity. All it would take is something as simple as a malfunction, or something along the lines of a computer virus, to delete the code that keeps humanity safe.