In 2009, the neuroscientist Henry Markram declared that he and his colleagues would build a complete simulation of the human brain inside a supercomputer within a decade. They had spent years mapping the cells of the neocortex, the part of the brain believed to be the nexus of thought and perception. His team planned to create a virtual rainforest in silicon, where they hoped artificial intelligence would grow. Markram compared the process to cataloging a part of the rainforest by the number of its trees and their shape.

But is it possible to use the brain as a map for creating AI?

Markram's idea, to understand the nature of biological intelligence by mimicking its design, has a long history. The Spanish anatomist and Nobel laureate Santiago Ramón y Cajal detailed his microscopic studies of the brain, whose tissue he found so dense that it resembled a thick forest of branches and leaves. Essentially, he saw neurons as input-output pathways: they received electrochemical messages through treelike branching structures and passed signals onward along axons, slender tubes.

Cajal laid the foundation for how scientists studied brain function, and technological advances grew from the same focus. The neurophysiologist Warren McCulloch and his protégé Walter Pitts proposed a framework for how brain cells might encode complex thoughts. They hypothesized that each neuron performs a logical operation, combining many inputs into a single binary output that is either true or false. These operations can be combined, like words into sentences, to support more complex cognitive reasoning. Their framework eventually evolved into the artificial neural networks commonly used in deep learning today.
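The McCulloch-Pitts idea can be sketched in a few lines: a unit sums binary inputs and fires only if the sum reaches a threshold, and choosing the threshold yields familiar logical operations. The thresholds and the composed example below are illustrative choices, not taken from the original 1943 paper.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) if enough inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# Logical operations fall out of the threshold choice:
def AND(a, b):
    return mp_neuron([a, b], threshold=2)  # fires only if both inputs fire

def OR(a, b):
    return mp_neuron([a, b], threshold=1)  # fires if either input fires

# Chaining units builds larger expressions, e.g. (a OR b) AND c:
def composed(a, b, c):
    return AND(OR(a, b), c)
```

Composing such units is the sense in which simple true/false operations can be assembled into more elaborate reasoning.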

Neural networks are loose approximations of what occurs inside the brain. To recognize a friend's face, for instance, the brain must filter raw data from the retina through layers of neurons in the cerebral cortex. Each layer extracts visual features of the friend and then assembles them into the final image.

Deep neural networks learn to process the world in a similar way. Data moves from a large set of neurons through successively smaller groups, each layer drawing information from the one before it. The process works in stages: the first layer identifies edges and areas of light and dark, then combines them into textures; the next layer assembles those textures into the shape of a nose or a face. The process continues until the face is recognizable.
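The staged flow described above can be sketched as function composition: each layer transforms the output of the previous one, and the groups of neurons shrink as data moves forward. The layer sizes and random weights here are hypothetical, purely to illustrate the structure, not a trained face recognizer.

```python
import random

random.seed(0)  # make the illustrative weights reproducible

def dense_layer(n_in, n_out):
    """Build a fully connected layer with random weights and a ReLU activation."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    def forward(x):
        # Each output neuron sums its weighted inputs, clipped at zero (ReLU).
        return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]
    return forward

# Successively smaller layers, mirroring the edges -> textures -> shapes stages:
layers = [dense_layer(16, 8),   # stand-in for edge-like detectors
          dense_layer(8, 4),    # stand-in for texture-like combinations
          dense_layer(4, 1)]    # stand-in for a final recognition score

x = [random.random() for _ in range(16)]  # stand-in for raw retina/pixel data
for layer in layers:
    x = layer(x)  # each stage consumes the previous stage's features

print(len(x))  # -> 1: a single output value remains
```

In a real network the weights would be learned from data; the point here is only the funnel of successive transformations.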

Although neural networks were inspired by the brain, most of them work very little like it. This is partly because they learn mathematical shortcuts that would be difficult, if not impossible, for a human brain to discover. What brains and AI models do share is that researchers don't understand why either works as well as it does. Many neuroscientists and computer scientists are searching for a universal theory of intelligence, a set of principles that holds for both silicon and brain tissue, and theories abound as to how each works so efficiently. Yet even eleven years after Markram announced his simulated brain project, it hasn't offered any fundamental insight into the study of intelligence.

Even if neuroscientists map the entire brain, or try to recreate intelligence by simulating each of its molecules, that doesn't mean they'll uncover the underlying principles of brain activity. What they create, they won't necessarily understand. It may be that AI models don't need to resemble a brain at all: airplanes fly without being identical to birds. However, the best way to understand intelligence is to draw on principles and theories from biology, and these can reach far beyond the brain. Evolution is the prime example of how unplanned design has created a world of brilliant solutions. In learning how the brain creates intelligence, we are searching for something that seems elusive; but when we do find it, we'll be able to recognize it.