A new study published in Nature Neuroscience reveals how computation can be used to identify the visual stimuli to which individual brain cells respond most strongly.
Researchers built multi-layered artificial neural networks to predict the neural activity a biological brain produces in response to a wide range of visual stimuli. The networks act as virtual neurons linked together, rather like the brain’s visual cortex. Using these artificial networks, the researchers were able to identify which images induced the strongest responses from specific neurons.
Vision – simple sensation, complex processing
Once our eyes are open we see the outside world, a phenomenon so commonplace that we take it for granted. Yet vision is a complex process: a coherent picture of the world must be built up from the images that light reflected off surrounding objects casts upon the retina. Converting photons from many images of the same objects into electrical impulses, processing those impulses through multiple pathways in the brain, and assembling the final picture involves so many steps that brain cells are, of necessity, tuned to respond to light-induced images in many different ways.
Because light can form an effectively endless variety of images, it is difficult to define how neurons respond to images in general. In fact, past studies relied on hunches, luck, and perseverance to discover which types of images particular neurons preferentially responded to. Even so, this approach produced fundamental discoveries in the field of vision.
Inception loops – the new framework
This time around, the researchers wanted a more systematic approach. They first recorded neural activity on a massive scale using a mesoscope, a functional imaging microscope developed to capture the activity of large neuronal populations. They showed mice about 5,000 natural images while the mesoscope simultaneously recorded the activity in their brains as they viewed the images. These image–response pairs formed the dataset used to train the artificial neural network to respond to visual stimuli just like the real brain.
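The train-then-validate workflow described above can be sketched in miniature. The snippet below is a toy stand-in, not the paper’s model: the deep network is replaced by a simple linear readout fitted by least squares to synthetic image–response pairs, and prediction quality is then checked on held-out images, mirroring the overall procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the recorded dataset: synthetic "images" and the responses
# of simulated neurons, each driven by a hidden linear filter plus rectification.
n_pixels, n_neurons, n_train, n_test = 64, 5, 2000, 500
true_filters = rng.normal(size=(n_pixels, n_neurons))
train_images = rng.normal(size=(n_train, n_pixels))
train_resp = np.maximum(train_images @ true_filters, 0.0)  # rectified responses

# "Train" the predictive model: here just a linear least-squares readout,
# standing in for the deep network fitted to image-response pairs.
W, *_ = np.linalg.lstsq(train_images, train_resp, rcond=None)

# Validate on images the model has never seen, as the researchers did.
test_images = rng.normal(size=(n_test, n_pixels))
test_resp = np.maximum(test_images @ true_filters, 0.0)
test_pred = test_images @ W
corr = np.corrcoef(test_pred.ravel(), test_resp.ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

The held-out correlation is the key quantity: a model counts as an adequate “avatar” only if it predicts responses to images it was never trained on.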
To validate their findings, they then exposed the network to images it had never seen and compared its predictions with the real brain’s responses to the same images. The network turned out to have been trained successfully: it reacted to visual stimuli much like a live mouse brain.
The network could correctly predict how the biological brain would respond to a given image. The researchers used this capability to synthesize a new set of images designed to produce maximal excitation for each group of neurons: the optimal stimuli, or most exciting inputs (MEIs).
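The MEI synthesis step can likewise be sketched. The snippet below is a hypothetical, heavily simplified version: a single model neuron with a purely linear response, where gradient ascent on the input pixels under a fixed contrast (norm) budget recovers the neuron’s preferred pattern. In the actual study the gradient would come from backpropagation through the trained deep network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model neuron: a purely linear response to a learned filter.
n_pixels = 64
w = rng.normal(size=n_pixels)            # the trained model neuron's weights

def response(x):
    return float(np.dot(w, x))           # gradient of the response w.r.t. x is just w

# Synthesize a "most exciting input" by gradient ascent on the image pixels,
# projecting back onto a fixed contrast budget after every step.
x = rng.normal(size=n_pixels) * 0.01     # start from a faint random image
budget = 1.0
for _ in range(200):
    x = x + 0.1 * w                      # step up the response gradient
    x = x * (budget / np.linalg.norm(x)) # keep the image within budget

# The optimized image converges to the neuron's preferred pattern.
alignment = np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w))
print(f"response: {response(x):.3f}, alignment: {alignment:.3f}")
```

For this linear toy neuron the MEI is trivially the filter itself; the point of the real method is that the same ascent procedure works on neurons whose preferences are not known in advance.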
In other words, they had built “an avatar of the visual system” on which they could test how the brain responds to any number and any type of visual stimulus, without the limits imposed by working with an actual brain. Finally, they verified the predictions produced by this method, which they call “inception loops”, in biological brains.
The neural network is easier to work with, since it allows any image to be tested for the response it evokes. Knowing which stimuli optimally excite different types of neurons can be surprising as well as enlightening, because it advances our understanding of how the brain processes information to produce visual sensations.
Researcher Andreas Tolias comments, “Experimenting with these networks revealed some aspects of vision we didn’t expect. For instance, we found that the optimal stimulus for some neurons in the early stages of processing in the neocortex were checkerboards, or sharp corners as opposed to simple edges which is what we would have expected according to the current dogma in the field.”
For instance, in an area of the mouse brain called the primary visual cortex (V1), the MEIs had complex characteristics often seen in nature but not in keeping with the images commonly accepted as being ‘liked’ by V1. When the newly acquired data were used to build a designer image and that image was presented to living neurons, the responses showed markedly greater excitation than the control images evoked.
Moreover, inception loop methodology need not be confined to visual processing in one part of the brain alone. Instead, says Fabian Sinz, “we think that this framework of fitting highly accurate artificial neural networks, performing computational experiments on them, and verifying the resulting predictions in physiological experiments can be used to investigate how neurons represent information throughout the brain. This will eventually give us a better idea of how the complex neurophysiological processes in the brain allow us to see.”
Walker, E.Y., Sinz, F.H., Cobos, E., Muhammad, T., Froudarakis, E., Fahey, P.G., Ecker, A.S., Reimer, J., Pitkow, X., & Tolias, A.S. (2019). Inception loops discover what excites neurons most using deep predictive models. Nature Neuroscience. https://doi.org/10.1038/s41593-019-0517-x