Since 1956, when John McCarthy coined the term Artificial Intelligence at the Dartmouth Conference, we have reached unthinkable heights: we have taught machines to see, to recognize images and text; we have taught them to read, to listen, to speak, and to translate. In spite of all that, they are not even close to achieving full human capacities; they do not comprehend, understand, or learn beyond the environment in which they are placed. They have unmatched computing power but little to no true creativity; a superb ability to combine and analyze probabilities, but not a single spark of unsupervised creativity. If someone tells you that the "singularity", the moment at which Artificial Intelligence achieves a definitively human-like character and surpasses our own biological limits on intelligence, is around the corner, they are being too optimistic: the consensus among experts is that we are still decades away from a singularity point, and some think we may never reach truly human-like intelligence.
Regardless of how far off a future in which we must address machines as intellectual equals or superiors may be, the AI revolution is already unstoppable in many other ways. The amount of resources and talent that the largest global corporations are devoting to it is accelerating a new industrial revolution, one in which the speed of adoption increasingly outpaces the pace of adaptation.
The application of increasingly complex automation to problems that used to be solved entirely by humans opens the door to great opportunities, but also to fallout and shortcomings. According to the consulting firm Accenture, American companies are expected to invest 35 trillion dollars in cognitive technologies before 2035, and that does not take into account other big players such as Europe, China, and Japan. Governments, such as the French one, are recognizing the importance of AI for the economy and society. In Spain, the Ministry of Digital Agenda (Minetad) has created a group of experts that is working on a White Paper on AI.
This shift opens the door to many opportunities, but also to risks, threats, and misunderstandings. In this scenario we should be more concerned about the human bias introduced into AI than about self-conscious killing machines. To avoid misuse, ethical norms have to be at the core of AI development and answer questions such as "how" and "what for".
There are initiatives creating a trail of principles and rules, such as the 23 Asilomar principles. In fact, a growing number of voices are calling for a global consensus on minimum rules for the role of AI in society, something the European Union is starting to explore.
As early as 1942, Isaac Asimov was able to envision a world in which Artificial Intelligence needed to be governed by basic laws. An AI that makes life-and-death decisions requires clear ethical norms that limit the possibility of AI negligence or weaponized AI (see Project Maven and Google), as depicted in the dystopian short film 'Slaughterbots', by Stuart Russell.
In part to correct the perception of the role of AI in the company's future, Google recently released a new set of principles that will guide it in the application of new AI capabilities, including a rejection of lethal applications of AI.
Intelligent machines are going to be embedded in every infrastructure, from air traffic control to the electric grid to urban planning. The imminence of self-driving cars, trucks, and passenger planes generates understandable public concern and requires solid safeguards against hacking and unfair decision-making, as well as proper responses to unexpected events.
Government, civil society, academia, and business have to establish communication channels to implement measures that reduce the frequency and impact of errors. Furthermore, they have to create platforms in which to measure and correct algorithmic decisions that perpetuate or widen inequality. In this early stage of human-taught machines, there is a high possibility that machine learning designers will, deliberately or inadvertently, transmit biases that make society more unfair instead of moving it in the opposite direction.
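Measuring such disparities can start very simply. The sketch below illustrates one common fairness metric (the demographic parity gap: the difference in positive-decision rates between two groups); the toy decisions and group labels are hypothetical, and real audits use far richer metrics and data.

```python
# A minimal sketch of one way to quantify disparate outcomes in a model's
# decisions. The metric (demographic parity gap) is one of several used in
# fairness audits; the toy data below is purely illustrative.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups:    list of group labels, parallel to `decisions`
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Toy example: 1 = approved, 0 = denied, for applicants in groups "A" and "B".
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 vs 0.25 -> 0.5
```

A platform for correcting algorithmic decisions would track metrics like this over time and trigger review when the gap exceeds an agreed threshold.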
To correct these problems, increased transparency will be essential. The opacity of the "black boxes" used to build machine learning models could become more harmful as intelligent machines are entrusted with decision-making. The ability of these machines to "explain" the reasoning behind a decision will improve accountability and supervision, and avoid handing full control to artificial agents.
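For some model families, such an "explanation" is straightforward. The sketch below shows the idea for a linear scoring model, where each feature's contribution to the score can be reported directly; the weights and feature names are hypothetical, and explaining genuine black-box models requires more elaborate techniques.

```python
# A minimal sketch of a model "explaining" a decision: for a linear scoring
# model, each feature's contribution (weight * value) is directly readable.
# The weights and the applicant's features below are hypothetical.

def explain_decision(weights, features):
    """Return (feature, contribution) pairs, largest magnitude first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

weights   = {"income": 0.5, "debt": -0.8, "years_employed": 0.2}
applicant = {"income": 4.0, "debt": 3.0, "years_employed": 5.0}

for name, contribution in explain_decision(weights, applicant):
    print(f"{name}: {contribution:+.1f}")
```

Here a supervisor can see at a glance that the applicant's debt dominated the outcome, which is exactly the kind of visibility that accountability requires.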
The next challenge is how to integrate AI into a world where different levels of economic development, political priorities, and skilled workforces coexist. The G20 is already working on measures to mitigate the impact of work automation in developed countries and in sectors likely to be heavily automated in the next few decades. Ideas such as basic income or "taxing robots" should at least be considered by policy makers in order to correct extreme inequalities that would otherwise push a large number of citizens to the fringes of society, without the resources needed to join the AI economy.
As economists Anton Korinek and Joseph Stiglitz put it: "innovation could lead to a few very rich individuals, whereas the vast majority of ordinary workers may be left behind, with wages far below what they were at the peak of the industrial age".
But the creative potential of AI is much greater and outweighs these risks. AI in the workplace could equip workers with almost superhuman abilities to perform complex tasks that take advantage of vast amounts of information, personalizing services to a level that even today's customers would deem impossible.
An "augmented" AI-assisted worker could complete transactions and services that are better distributed and fairer, which could usher the world into a new era of development and inclusion. Not only will tedious tasks eventually be eliminated; humans will also have more time to reflect on the true purpose of new technologies, or to have a greater impact in their communities.
Medical diagnosis could become almost flawless, and the process of preventing or finding solutions to possible adverse scenarios could be analyzed in record time. Education, healthcare, and financial services (key components of an AI-driven economy) would become more accessible, even for those who have had unequal access until today, because a personalized path to inclusion could be built for each person. At the same time, humans would no longer have to be exposed to dangerous or physically eroding jobs, creating a services-centered workforce.
The use of AI for social good can help us identify hidden needs in vulnerable parts of the population, find insights into how wealth is transmitted and circulates through the economic system, and fight corruption, fraud, and crime. AI can also facilitate channels for a more direct, participatory democracy, speed up bureaucratic processes, and improve budgetary prioritization. For all of these goals to be achievable, AI tools should incorporate elements that are increasingly present in the debate on the development of automation: "Privacy by Design", "Ethics by Design", "Fairness by Design", "Sustainability by Design", "Transparency by Design", and so on.
In conclusion: the augmented capacities that AI will add to our ability to process information and make decisions should improve our lives in general, eliminate tedious tasks in the workplace, increase productivity in the medium to long run, and, most likely, free us to undertake more creative tasks or simply reduce the workload per capita. The shortcomings that may arise during this complex transition have to be tackled not only from a technological perspective, but also from socioeconomic and ethical points of view. The design of effective political and accountability frameworks is a necessary component if we want to extract the full potential of AI technologies.