Artificial Intelligence — There has not been a breakthrough yet

Artificial Intelligence (AI) is the mantra of the current era. Technologists, academics, journalists, and venture capitalists alike intone the phrase. As with many terms that cross over from specialized academic fields into general circulation, the phrase's use is accompanied by significant misunderstanding.

But this is not the classical case of the public not understanding the scientists; here the scientists are often as befuddled as the public. The idea that our era is somehow witnessing the emergence of an intelligence in silicon that rivals our own entertains us all, enthralling and frightening us in equal measure. And, unfortunately, it distracts us.

There is a different narrative one can tell about the current era. Consider the following story, which involves humans, computers, data, and life-or-death decisions, but where the focus is on something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound.

A geneticist was in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she said, “and your risk has now gone up to 1 in 20.” She let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis.

But amniocentesis was risky: the chance of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I was determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the United Kingdom, in which these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome.

But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed the white spots were likely false positives, literally “white noise.” She said, “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; that's when the new machine arrived.”

We did not do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people worldwide had received this diagnosis that same day, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem this episode revealed was not about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times.
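To make the reasoning concrete, here is a hedged sketch of the kind of back-of-the-envelope Bayes calculation involved: it shows how the probability of Down syndrome given the white-spot marker depends on the marker's false-positive rate. The prevalence, sensitivity, and false-positive rates below are assumptions chosen purely for illustration; only the 1-in-20 and 1-in-300 figures come from the story above.

```python
# Illustrative Bayes calculation; all inputs except the quoted 1-in-20
# and 1-in-300 figures are assumptions for the sake of the sketch.
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(condition | marker observed), by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

prevalence = 1 / 700     # assumed baseline rate of Down syndrome
sensitivity = 0.5        # assumed chance the marker appears when present

# A false-positive rate roughly consistent with the quoted 1-in-20 risk.
print(posterior(prevalence, sensitivity, 0.013))   # ~0.05, about 1 in 20

# If a higher-resolution machine turns imaging noise into "spots," the
# false-positive rate rises and the posterior collapses below the
# 1-in-300 risk of the amniocentesis itself.
print(posterior(prevalence, sensitivity, 0.30))    # ~0.002, about 1 in 420
```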

The problem had to do not just with data analysis per se, but with what database researchers call “provenance”: broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
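As a rough illustration of what carrying provenance alongside an inference might look like, here is a minimal sketch in Python. The record types, fields, and the crude relevance check are hypothetical, not a standard schema; they simply make the question “where did this inference come from, and does it still apply?” explicit in code.

```python
# Hypothetical provenance record attached to an inference; the fields and
# the crude relevance check are illustrative, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Provenance:
    source: str        # study, registry, or device the data came from
    collected: date    # when the underlying data were gathered
    population: str    # who the data describe
    instrument: str    # measurement device or imaging system used

@dataclass
class Inference:
    claim: str
    risk_estimate: float
    provenance: List[Provenance] = field(default_factory=list)

    def relevant_to(self, instrument: str, year: int) -> bool:
        """Crude check: same instrument, and the data are not too stale."""
        return any(
            p.instrument == instrument and year - p.collected.year <= 10
            for p in self.provenance
        )

marker_study = Provenance(
    source="UK ultrasound marker study",
    collected=date(1990, 1, 1),
    population="pregnancies screened in the UK",
    instrument="low-resolution ultrasound",
)
finding = Inference(
    claim="white cardiac spots predict Down syndrome",
    risk_estimate=1 / 20,
    provenance=[marker_study],
)

# A higher-resolution machine a decade later: the inference may not transfer.
print(finding.relevant_to("high-resolution ultrasound", year=2003))  # False
```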

I am also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics and taking human utility into account, were nowhere to be found in my education. And it occurred to me that developing such principles, which will be needed not only in the medical domain but also in domains such as commerce, transportation, and education, is at least as important as building AI systems that can dazzle us with their game-playing or sensorimotor skills.

Whether or not we come to understand “intelligence” any time soon, we have a major challenge on our hands in bringing computers and humans together in ways that enhance human life. While some view this challenge as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically, but with no less reverence, as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, to bring new resources and capabilities to people, and to do so safely.

Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the past century gave substance to, ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.

Thus, just as humans built buildings and bridges before there was civil engineering, humans are now building societal-scale inference-and-decision-making systems that involve machines, humans, and the environment. Just as early buildings and bridges sometimes fell to the ground, in unforeseen ways and with tragic consequences, many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we are missing is an engineering discipline with its principles of analysis and design.

The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science, and many other disciplines (see below) to design algorithms that process data, make predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, it was already clear in the early 1990s that ML would grow into massive industrial relevance, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As data sets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data.
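To give a concrete flavor of such back-end ML, here is a minimal, hedged sketch of a fraud-style scoring task: train a classifier on historical labeled transactions and flag the riskiest new ones. The features, labels, and threshold are synthetic and purely illustrative; no company's actual pipeline is implied.

```python
# Minimal sketch of back-end ML scoring; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
# Hypothetical transaction features: log-amount, account age (days),
# and number of transactions in the past hour.
X = np.column_stack([
    rng.normal(3.0, 1.0, n),
    rng.exponential(400.0, n),
    rng.poisson(1.0, n).astype(float),
])
# Synthetic labels: fraud is rare and loosely tied to the features.
logit = -5.0 + 0.8 * X[:, 0] - 0.004 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Train on "historical" transactions, score "new" ones.
model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
scores = model.predict_proba(X[4000:])[:, 1]

# Flag the riskiest 1% of new transactions for review.
flagged = scores > np.quantile(scores, 0.99)
print(f"flagged {flagged.sum()} of {len(scores)} held-out transactions")
```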

New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need for experts in ML algorithms to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.

Historically, the phrase “AI” was coined in the late 1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically then at least mentally (whatever that might mean). This was largely an academic enterprise. Related academic fields such as operations research, statistics, pattern recognition, information theory, and control theory already existed, and they were often inspired by human intelligence (and animal intelligence).

These fields, however, were arguably focused on low-level signals and decisions, inspired by, say, the ability of a squirrel to perceive the three-dimensional structure of the forest it lives in and to leap among its branches. “AI” was meant to focus on something different: the high-level or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments now being called “AI” arose mostly in the engineering fields concerned with low-level pattern recognition and movement control, and in the field of statistics, the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses, and decisions.

Indeed, the famous “backpropagation” algorithm that David Rumelhart rediscovered in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spacecraft as they headed towards the moon.
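For readers who want to see the idea rather than just the history, here is a minimal sketch of gradient-based training with a manually derived backward pass, the chain-rule bookkeeping that backpropagation and the control-theoretic methods share. The toy data, network size, and step size are arbitrary assumptions chosen only for illustration.

```python
# A minimal sketch of gradient-based training with manual backpropagation;
# the data, architecture, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy target

W1 = rng.normal(scale=0.5, size=(2, 8))       # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted probabilities
    # Backward pass: propagate the error signal layer by layer (chain rule).
    grad_out = (p - y) / len(X)               # d(cross-entropy)/d(logit)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1.0 - h**2)   # back through the tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("training accuracy:", ((p > 0.5) == y).mean())
```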

Much progress has been made since the 1960s, but it has arguably not come from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spacecraft, these ideas have often been hidden behind the scenes, the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, spam detection, recommendation systems, personalized search, social network analysis, planning, diagnostics, and A/B testing have been a major success; these are the advances that have powered companies such as Google, Netflix, Facebook, and Amazon.

One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization and statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.

Over the past two decades, major progress has been made, in industry and academia, on a complementary aspiration to human-imitative AI, often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments a human's ability to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists.

Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data, and physical entities exists that makes human environments more supportive, interesting, and safe. Such infrastructure is beginning to appear in domains such as transportation, medicine, commerce, and finance, with vast implications for individuals and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet, not to the far grander set of challenges associated with those “things” being able to analyze the resulting data streams to discover facts about the world, and to interact with humans and other “things” at a far higher level of abstraction than mere bits.

Returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows and data-analysis flows between doctors and devices positioned in and around human bodies, thereby aiding human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, the environment, population genetics, and the vast scientific literature on drugs and treatments. It would not focus merely on a single patient and a doctor, but on relationships among all humans, just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance, and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And while one can foresee many problems arising in such a system, involving privacy, liability, security, and more, these should properly be viewed as challenges, not show-stoppers.

Now we come to a critical issue: is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI, areas such as computer vision, speech recognition, game-playing, and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited; we are very far from realizing human-imitative AI aspirations.

Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack of competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, particularly in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.

As for the necessity argument, it is sometimes claimed that the human-imitative AI aspiration subsumes the IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, for example, in the Turing test), but would also be our best bet for solving IA and II problems. Such a claim has little historical precedent. Did civil engineering develop by attempting to build an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would then have worked out how to build a chemical factory?

A related argument is that human intelligence is the only kind of intelligence we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning; we have our lapses, biases, and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence but also “correct” it, and would scale to arbitrarily large problems. But we are now in the realm of science fiction; such speculative claims, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own terms, not as a mere corollary to a human-imitative AI agenda.

It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions, and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals (a sketch of one way to cope with this appears below). They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as creating markets rather than simply providing a service. Domains such as music, literature, and journalism are crying out for the emergence of such markets, where data analysis links producers and consumers. And all of this must be done within the context of evolving societal, ethical, and legal norms.
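As one concrete illustration of the long-tail issue, here is a minimal, hedged sketch of partial pooling: per-user estimates are shrunk toward a global average in proportion to how little data each user has. The simulated activity counts, rates, and prior strength are assumptions made up for the example, not a prescription for any real II system.

```python
# Illustrative only: simulated long-tail user activity and a simple
# partial-pooling estimator; the counts, rates, and prior strength are
# assumptions, not parameters of any real system.
import numpy as np

rng = np.random.default_rng(1)
n_users = 1000

# Heavy-tailed activity: a few users generate most observations,
# most users have only a handful (capped to keep the toy example small).
n_obs = np.minimum(rng.zipf(a=2.0, size=n_users), 10_000)
true_rate = rng.beta(2, 8, size=n_users)       # each user's true click rate
clicks = rng.binomial(n_obs, true_rate)

# Naive per-user estimate: noisy or useless for users with few observations.
naive = clicks / n_obs

# Partial pooling: shrink toward the global mean via an assumed Beta prior.
global_mean = clicks.sum() / n_obs.sum()
prior_strength = 20.0                          # assumed pseudo-count
alpha = prior_strength * global_mean
beta = prior_strength * (1.0 - global_mean)
pooled = (clicks + alpha) / (n_obs + alpha + beta)

for name, est in [("naive", naive), ("pooled", pooled)]:
    rmse = np.sqrt(np.mean((est - true_rate) ** 2))
    print(f"{name:>6} estimator RMSE vs. true rates: {rmse:.3f}")
```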

Classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly defined human skills, with little in the way of emerging explanatory principles, tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally tractable representations of uncertainty, and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.

IA will also remain quite essential, because for the foreseeable future computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).

It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems, a vision closely tied to operations research, statistics, pattern recognition, information theory, and control theory. McCarthy, on the other hand, emphasized the ties to logic.

In an interesting reversal, it is Wiener's intellectual agenda that has come to dominate in the current era, under the banner of McCarthy's terminology. (This state of affairs is surely only temporary; the pendulum swings more in AI than in most fields.)

But we need to move beyond the particular historical perspectives of McCarthy and Wiener and consider the full scope of AI, IA, and II.

That broader focus is less about fulfilling science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, this understanding and shaping requires a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.

While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing together researchers from the computational and statistical disciplines with researchers from other disciplines whose contributions and perspectives are sorely needed, notably the social sciences, the cognitive sciences, and the humanities.

On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope: society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options, and commercial opportunities, only to find out after the fact that these systems don't really work, that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, an engineering discipline for the data-focused and learning-focused fields has yet to emerge; as exciting as these fields appear to be, they cannot yet be viewed as constituting a branch of engineering.

Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense, in academia and beyond, with overtones of cold machinery and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.

I will resist giving this emerging discipline a name, but if the acronym “AI” continues to serve as placeholder nomenclature going forward, let's be aware of the very real limitations of this placeholder. Let's broaden our scope, tone down the hype, and recognize the serious challenges ahead.