Is the Brain an Effective Model for Artificial Intelligence?

Last updated: 07-04-2020

In the summer of 2009, the Israeli neuroscientist Henry Markram walked onto the TED stage in Oxford, England, and made an immodest proposal: within a decade, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They had already spent years mapping the cells of the neocortex, the supposed seat of thought and perception. "It's a bit like going and cataloging a piece of rainforest," Markram explained. "How many trees does it have? What shapes are the trees?" Now his team would create a virtual rainforest in silicon, out of which they hoped artificial intelligence would organically emerge. If all went well, he quipped, perhaps the simulated brain would give the follow-up TED talk, beamed in by hologram.

Markram's idea, that we might grasp the nature of biological intelligence by mimicking its forms, was rooted in a long tradition, dating back to the work of the Spanish anatomist and Nobel laureate Santiago Ramón y Cajal. In the late 19th century, Cajal undertook a microscopic study of the brain, which he compared to a forest so dense that "the trunks, roots, and leaves reach everywhere." By sketching thousands of neurons in meticulous detail, Cajal was able to infer an astonishing amount about how they work. He saw that they were effectively one-way input-output devices: they received electrochemical messages through treelike structures called dendrites and passed them along through slender tubes called axons, much like "the junctions of electric conductors."

Cajal's way of looking at neurons became the lens through which scientists studied brain function, and it also spurred major technological advances. In 1943, the psychologist Warren McCulloch and his protege Walter Pitts, a homeless teenage math prodigy, proposed an elegant framework for how brain cells encode complex thoughts. Each neuron, they theorized, performs a basic logical operation, combining multiple inputs into a single binary output: true or false. Like letters of the alphabet, these operations could be strung together into words, sentences, paragraphs of cognition. McCulloch and Pitts' model turned out not to describe the brain very well, but it became a key part of the architecture of the first modern computer, and it eventually evolved into the artificial neural networks now widely used in deep learning.
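
To make McCulloch and Pitts' idea concrete, here is a minimal sketch in Python of such a threshold unit. The weights and thresholds are illustrative choices of ours, not values from their 1943 paper; each unit simply sums its weighted inputs and "fires" (outputs true) only if the sum reaches a threshold, and units can be wired together into logical operations:

    # A minimal sketch of a McCulloch-Pitts threshold unit. The weights and
    # thresholds below are illustrative, not taken from the original paper.
    def mp_neuron(inputs, weights, threshold):
        """Fire (return True) only if the weighted sum of inputs meets the threshold."""
        return sum(x * w for x, w in zip(inputs, weights)) >= threshold

    # Composing units into logical operations, the building blocks McCulloch
    # and Pitts imagined being strung together like letters into sentences:
    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a: mp_neuron([a], [-1], threshold=0)

    print(AND(1, 1), OR(1, 0), NOT(1))  # True True False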

All such networks might better be called neural-ish. Like the McCulloch-Pitts neuron, they are impressionistic portraits of what goes on in the brain. Suppose a yellow Labrador approaches you. To recognize the dog, your brain must funnel raw data from your retinas through layers of specialized neurons in your cerebral cortex, which pick out the dog's visual features and assemble the final scene. A deep neural network learns to break down the world in a similar way. The raw data flows from a broad layer of artificial neurons through successively smaller layers, each pooling the inputs of the previous layer in a way that adds complexity to the overall image: the first layer picks out edges and bright spots, which the next combines into textures, which the next assembles into a snout, and so on, until a Labrador pops out.
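
That layered funneling can be sketched in a few lines of Python. The layer sizes below are arbitrary and the weights are random and untrained, so this toy forward pass only illustrates how data flows through successively smaller pooling layers, not how a trained network actually comes to detect edges or snouts:

    # A toy forward pass: raw data enters a wide layer and is funneled through
    # successively smaller layers. Weights are random stand-ins, not learned
    # feature detectors.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, n_out):
        """One layer: each output pools a weighted sum of all inputs, then
        applies a nonlinearity (ReLU), a rough analogue of firing or not."""
        w = rng.normal(size=(n_out, x.size))
        return np.maximum(0.0, w @ x)

    pixels = rng.random(1024)       # stand-in for raw retinal input
    edges = layer(pixels, 256)      # first layer: edges and bright spots
    textures = layer(edges, 64)     # next layer: textures
    parts = layer(textures, 16)     # next layer: parts such as a snout
    verdict = layer(parts, 1)       # final readout: evidence for "Labrador"
    print(verdict.shape)            # (1,)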

Despite these similarities, many artificial neural networks are distinctly un-brainlike, in part because they learn using mathematical tricks that would be difficult, if not impossible, for biological systems to carry out. Yet brains and AI models do share something fundamental: researchers still do not understand why they work as well as they do.

What computer scientists and neuroscientists are after is a universal theory of intelligence, a set of principles that holds true both in tissue and in silicon. What they have instead is a muddle of details. Eleven years and $1.3 billion after Markram proposed his simulated brain, it has yet to contribute any fundamental insight into the study of intelligence.

Part of the problem is something the writer Lewis Carroll put his finger on more than a century ago. Carroll imagined a nation so obsessed with cartographic detail that it kept expanding the scale of its maps: six yards to the mile, a hundred yards to the mile, and finally a mile to the mile. A map the size of an entire country is impressive, certainly, but what does it teach you? Even if neuroscientists could re-create intelligence by faithfully simulating every molecule in the brain, they would not have found the underlying principles of cognition. As the physicist Richard Feynman famously asserted, "What I cannot create, I do not understand." To which Markram and his fellow cartographers might add: "And what I can create, I do not necessarily understand."

It is possible that AI models do not need to mimic the brain at all. Airplanes fly despite bearing little resemblance to birds. Still, it seems likely that the fastest route to understanding intelligence runs through principles borrowed from biology, and not only from the brain: evolution's blind design has hit on brilliant solutions throughout nature. Our brightest minds are now pitted against the dim quasi-intelligence of a virus, which borrows its intellect from the reproductive machinery of our cells the way the moon borrows light from the sun. And as we pore over the details of how intelligence is implemented in the brain, it is worth remembering that, in the emperor's absence, we are describing the emperor's clothes. Still, we promise ourselves we will know him when we see him, whatever he wears.

