A central goal for many neuroscience researchers is to help paralyzed people use robotic arms to reach and grasp objects as they once did naturally. These experts have recorded brain activity in people with paralysis, hoping to identify patterns of electrical activity in neurons that correspond to a person's attempt to move their arm, so that the data can be used to tell a robotic arm how to move. In a nutshell, they want innovative prosthetics to read the human mind.

The problem remains a challenge, however. One researcher, Chethan Pandarinath, is using AI to help solve it. He fed his recordings of brain activity to an artificial neural network and programmed the machine to learn how to reproduce the data. Because the recordings come from only a small subset of neurons (about 200 of the 10 million to 100 million needed to move a human arm), he had to find the underlying structure in the data. That structure captures the temporal dynamics, meaning how patterns of neural activity change from moment to moment, and provides more fine-grained information for arm movement than previous approaches did. The new results show precisely at what angle someone is trying to move, which can be used to control the robotic arm.
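Training a network to reproduce its own input through a narrow bottleneck, so that the bottleneck is forced to capture the data's underlying structure, is the core of what machine learning calls an autoencoder. As a loose illustration of that idea only, not Pandarinath's actual model, here is a minimal linear autoencoder trained on synthetic "neural" recordings; every size, rate, and data value is invented:

```python
import numpy as np

# Toy sketch of the autoencoder idea (not Pandarinath's real system):
# compress synthetic multi-channel recordings through a 2-unit
# bottleneck and train the network to reproduce its own input.
rng = np.random.default_rng(1)

n_neurons, n_latent, n_samples = 20, 2, 500

# Synthetic "recordings": 20 channels driven by 2 hidden factors plus noise,
# so the data genuinely has low-dimensional underlying structure to find.
factors = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_neurons))
data = factors @ mixing + 0.1 * rng.normal(size=(n_samples, n_neurons))

W_enc = rng.normal(scale=0.1, size=(n_neurons, n_latent))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_latent, n_neurons))  # decoder weights

lr = 0.01
for _ in range(500):
    z = data @ W_enc        # encode: compress each sample to 2 latent factors
    recon = z @ W_dec       # decode: attempt to reproduce the input
    err = recon - data
    # Gradient descent on the squared reconstruction error.
    W_dec -= lr * (z.T @ err) / n_samples
    W_enc -= lr * (data.T @ (err @ W_dec.T)) / n_samples

mse = float(np.mean((data @ W_enc @ W_dec - data) ** 2))
print(f"reconstruction error after training: {mse:.3f}")
```

Because the bottleneck has only two units, a well-trained network must route everything through two latent factors, which is exactly the kind of compact structure a researcher can then inspect.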

This is one prominent example of the developing relationship between AI and cognitive science. AI excels at identifying patterns in complex data, and it has succeeded over recent decades partly by emulating how the brain performs some computations. Artificial neural networks, loose analogs of the biological networks that structure our brains, have given computers the ability to distinguish an image of a house from that of a mouse. Today, cognitive science is starting to benefit from AI in return, both as a model for developing and testing ideas about how the brain computes and as a tool for analyzing complex data. As the technology advances and is applied to understanding the brain, the relationship between the two fields will deepen, especially as it helps neuroscientists gather further insights. The result may be machines that exhibit more human-like intelligence.

 

Brain analogs

AI’s success comes from the arrival of more powerful processing and ever-growing data, but the advances rest on the artificial neural network. These networks consist of layers of nodes that are analogs of neurons. Nodes in an input layer are connected to nodes in a hidden layer by a series of mathematical weights, and the hidden layer is connected in the same way to an output layer. For a task like facial recognition, the input data might be a series of numbers describing each pixel in an image of a face according to where it falls on a 100-point scale from white to black. The data are fed into the system, the hidden layer multiplies those values by the weights of the connections between nodes, and an answer emerges at the output. A more complex version with many hidden layers is called a deep neural network. Such a network was used by DeepMind Technologies, owned by Google’s parent company, to build a computer that beat a professional human player at the game Go in 2015, a revolutionary moment for AI.
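The forward pass described above, with input values multiplied through weighted connections to a hidden layer and on to an output, can be sketched in a few lines. This is a minimal illustration; the layer sizes, weights, and "pixel" values are arbitrary, not drawn from any real system:

```python
import numpy as np

# Minimal sketch of the computation described in the text:
# input layer -> weights -> hidden layer -> weights -> output layer.
rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 3, 2   # tiny illustrative layer sizes

# Mathematical weights connecting input->hidden and hidden->output nodes.
W1 = rng.normal(size=(n_hidden, n_input))
W2 = rng.normal(size=(n_output, n_hidden))

def forward(x):
    """Multiply the inputs by the connection weights, layer by layer."""
    hidden = np.tanh(W1 @ x)   # hidden-layer activations
    return W2 @ hidden          # output layer: "an answer is formed"

# A toy "image": four pixel intensities on a 0-100 grayscale, rescaled.
pixels = np.array([12, 87, 45, 3]) / 100.0
print(forward(pixels))
```

Training such a network means adjusting the entries of the weight matrices until the outputs match the desired answers; a deep network simply stacks many hidden layers between input and output.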

Artificial neural networks have also proved helpful in studying the human brain. If these systems can produce patterns of neural activity that resemble those recorded from the brain, scientists can study how the system generates its output and then ask whether the brain does the same thing. In principle, this process may be applied to any cognitive task. As experts have put it, if you can train a neural network to do a task, then perhaps you can work out how that network functions and use that to understand the biological data.

 

The dynamics of data

AI techniques are useful not only for building models and generating ideas but also as tools for handling data. Neural data are complex, so scientists use machine-learning techniques to search them for structure.

For example, functional magnetic resonance imaging takes snapshots of activity across the brain at a resolution of 1 to 2 millimeters every second or so, potentially for hours. The challenge is finding the signal in these very large images, and using machines to analyze such data is accelerating research at the intersection of data science and neuroscience.
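One standard machine-learning tool for pulling a signal out of such high-dimensional recordings is principal component analysis, which finds the directions along which the data vary most. Here is a hedged sketch on synthetic, fMRI-like data; the shapes, noise levels, and hidden signal are all invented for illustration:

```python
import numpy as np

# Illustrative sketch: recover a hidden low-dimensional signal from
# synthetic high-dimensional "scan" data using principal component
# analysis. All shapes and values here are made up.
rng = np.random.default_rng(2)

n_timepoints, n_voxels = 300, 1000

# Synthetic signal: a slow oscillation shared (with random strength)
# across many voxels, buried in much larger independent noise.
t = np.linspace(0, 30, n_timepoints)
signal = np.sin(t)[:, None] * rng.normal(size=(1, n_voxels))
scans = signal + 2.0 * rng.normal(size=(n_timepoints, n_voxels))

# PCA via singular value decomposition of the mean-centered data.
centered = scans - scans.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = S**2 / np.sum(S**2)
print(f"variance explained by first component: {explained[0]:.1%}")

# The first component's time course should track the hidden oscillation
# even though no single voxel shows it cleanly.
component = U[:, 0] * S[0]
corr = abs(np.corrcoef(component, np.sin(t))[0, 1])
print(f"|correlation| with hidden signal: {corr:.2f}")
```

The point of the sketch is that a pattern invisible in any single noisy voxel becomes recoverable once the analysis pools structure across the whole image, which is exactly the kind of work machine-learning tools do on real scans.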

Creating an artificial system that can reproduce brain data was the approach taken by Daniel Yamins, a computational neuroscientist at Stanford University. In 2014, while a postdoctoral researcher at MIT, he and his colleagues trained a deep neural network to predict the brain activity of a monkey as it recognized objects. Object recognition is performed by a brain system called the ventral visual stream, which has two main structural features: it is hierarchical, and it is retinotopic, meaning it preserves the spatial layout of the visual field. The brain can recognize an object in various positions and under different lighting, even when it appears bigger or smaller with distance or is partially hidden; computers are still confused by these obstacles. When Yamins and his colleagues built their deep neural network with the same hierarchical, retinotopic structure and trained it on thousands of images of 64 objects, it learned to recognize them and produced several possible patterns of neural activity. The researchers then compared these computer-generated patterns to the neural patterns recorded from the monkey and found that the network versions best at recognizing objects were also those whose activity most closely matched the monkey's. They were able to map areas of their network onto areas of the brain with about 70 percent accuracy.

While neuroscientists are still a long way from understanding how the brain performs a task like distinguishing jazz from rock music, machine learning provides a way to construct models for exploring such questions. If researchers can design systems that perform like the brain, those designs can inform our understanding of how the brain solves the same tasks. This is crucial because scientists often lack a working hypothesis for how the brain accomplishes them.

Once researchers develop a hypothesis, they next need to test it. AI models can again help, by offering a representation of brain activity that can be reworked and manipulated to see which factors matter for accomplishing a task.

Ethical constraints limit how much researchers can intervene in the healthy human brain, which is why most recordings of human neural activity come from people with epilepsy who are scheduled to have brain tissue removed: it is permissible to implant electrodes in tissue that will later be excised. Animal models allow researchers to use more invasive procedures, but some human behaviors, particularly speech, cannot be replicated in other species.

AI systems that can mimic human behavior, and that can be probed and altered without these ethical concerns, will give scientists extra tools for exploring how the brain works. Of course, neuroscience alone will not be able to explain how something like unsupervised learning works; computer scientists will likely come up with further solutions that neuroscientists can then test. Answering these questions could yield more intelligent machines that learn from their surroundings and combine computer processing with human-like abilities. Data processing and modeling are already bringing about considerable advances in brain science, and they will continue to develop the field of neuroscience.