Artificial intelligence—which simultaneously possesses the greatest potential for evolutionary change and the gravest possibility of world destruction—is a technology that is already upon us. Defined as machines that are able to learn, plan, and problem solve—or, more simply, technology with the ability to work and react like a human would—AI is a daunting field of computer science that has been making waves in recent years. In popular culture, movies and the media often portray AI in a dark and dangerous light. Experts, however, believe that AI will actually positively augment the human race, though how exactly remains unknown. The questions still stand: do the benefits of AI outweigh the dangers? And at this point in time, is AI completely inevitable?
While Hollywood has produced countless tales of AI-induced woe, the vast majority of these films have proven to be poor predictors of how AI could actually work, even in strictly theoretical terms. In the 2014 film Transcendence, for example, the consciousness of the main character, Will, is uploaded to a quantum computer in order to “save him” from death—an act that leads to “Will’s consciousness” gaining near-total control of the digital world and ultimately wreaking havoc on the physical world, owing to its inherently amoral AI nature (similar films, such as the South African flick Chappie, depict a comparable consciousness-uploading scenario). AI experts, however, argue that while uploading one’s consciousness is an interesting idea, it is purely speculative and has no basis in fact, and therefore such dreary prospects are unlikely.
Similarly—and perhaps most importantly—AI experts point out that a lone programmer cannot develop artificial intelligence in a short timespan all by herself, as so many films like Ex Machina and Chappie suggest. The path to artificial intelligence is, and will continue to be, a slow and incremental one—a fact that bodes well for a potential partnership between humans and AI.
Google’s DeepMind technology—which is programmed to find structure in large data sets in order to recognize patterns and essentially “learn”—is probably the closest thing to true artificial intelligence available in today’s world. Even so, a plethora of pseudo-AI technologies are already in wide use: digital assistants such as Siri and Alexa, as well as the predictive systems on websites such as Amazon and Netflix, which use AI to observe behavior patterns and predict consumer behavior. The fact that these pseudo-AI technologies were integrated fairly seamlessly into society shows that, with small, incremental steps, we can learn to use and adapt to artificial intelligence almost effortlessly.
In fact, many artificial intelligence researchers are largely optimistic about the future of the relationship between humans and AI. While the exact mechanism behind the creation of AI is still mysterious, some researchers believe that AI will dramatically improve human lives in the near future: this technology could do everything from taking over menial tasks to protecting humans from dangerous conditions and disasters, all the way to helping solve the world’s biggest social and environmental issues, such as climate change or even world hunger. By treating artificial intelligence as an application and a tool, it will be easier to maintain control over these technologies and prevent a robot uprising.
Elon Musk is a well-known proponent of regulating artificial intelligence and often warns against its dangers on social media. Rather than simply advocating the abolition of AI, though, he admits that adding a digital AI layer to humans may be the best way to prevent artificial intelligence from subjugating or harming the human race. Many other researchers agree that the best use of artificial intelligence would be to encourage a symbiotic relationship between artificial and human intelligence, alongside the use of AI to augment physical capabilities. For example, AI could operate prosthetic limbs or even exoskeletons smartly and efficiently, and enhance natural abilities such as eyesight and hearing. Similarly, AI could perform difficult and advanced maneuvers, such as surgery or other complex health-related tasks.
In terms of our brains, artificial intelligence would most likely be used to enhance intelligence, increase speed of and access to data pools, and even download life skills. IBM—creator of the supercomputer Watson—suggests that the acronym AI should stand for “augmented intelligence” rather than “artificial intelligence,” as true AI will be used to augment human brainpower rather than surpass it. By optimizing the knowledge needed in every field—assisting doctors in interpreting medical data, for instance, or helping students learn more effectively—AI would essentially be a smarter, extremely efficient search engine, capable of providing each individual with the unique information they need.
In the end, artificial intelligence presents a plethora of opportunities that are more likely to spur humans into a future of unimaginable advancement than into the one-dimensional doom-and-gloom scenario that so much of pop culture appears to warn against. While there are real dangers associated with artificial intelligence—such as AI developing destructive methods for achieving a programmed goal, or AI being coded for destructive purposes—the nature of artificial intelligence in and of itself does not guarantee a dystopian future. With incremental steps, a symbiotic relationship between humans and artificial intelligence is entirely possible, and a new future of never-before-seen levels of superintelligence and insight can be brought to fruition.
About the author: Ana C. Rold is Founder and CEO of Diplomatic Courier, a Global Affairs Media Network. She teaches political science courses at Northeastern University and is the Host of The World in 2050–A Forum About Our Future. To engage with her on this article follow her on Twitter @ACRold.