A human brain floating in liquid in a bell jar against a dark, foggy background (3D illustration).

Getty Images

Scientists and philosophers have long pondered what it would be like to live in a fake reality. The idea, famously articulated by René Descartes, holds that there is no sure way to distinguish between an external world and a manipulation of our senses. Descartes imagined this manipulation coming from an evil demon; the modern version imagines a cabal of twisted scientists.

A new paper published in Science has inched us closer to this dystopian possibility. The study shows that stimulating specific cells in the visual cortex can induce mice to behave as if they had actually perceived an image. If you're a brain in a vat, then, all it might take to make you think you're seeing a beautiful mountain vista, or the microwave in your one-bedroom apartment, is the stimulation of a handful of cells in your visual cortex.

Manipulating neurons to produce a given sensation or movement is by no means unique to this experiment. We've engineered robotic arms, for instance, to respond to the activation of a specific suite of neurons in the brain's motor cortex. For the brain-in-a-vat scenario, this strongly suggests that the activation of only a couple hundred cells could make us feel we are moving a given limb or body part. Doing the same for vision is just another step in the dystopian direction.

While scientists can't directly measure whether an animal subjectively perceives an image, getting it to behave exactly as if it had just seen one is a close approximation. If an animal behaves as though it saw an image on the basis of nothing but cerebral stimulation, in other words, we can assume that in some capacity it did, in fact, perceive that image. And this is what these scientists set out to establish.

To know exactly which cells to manipulate, the researchers, led by James H. Marshel at Stanford University in California, first trained the mice to perform a task. Specifically, the animals were trained to lick from a spigot when they saw a line presented at a certain orientation: if the line was horizontal, they were to lick; if it was vertical, they were to abstain.

Once the mice had learned the task, the researchers carefully monitored which cells in the visual cortex were activated when an animal perceived these different lines. When a mouse licked from the spigot after seeing the horizontal line, the scientists knew which neuronal ensemble was involved; the same went for the vertical line.

With this data in hand, the researchers could now try to artificially activate the neurons responsible for the perception of these different lines. If the mouse were to lick only when the horizontal line neurons were activated, it would suggest that the stimulation of these neurons induced the mouse to perceive the horizontal line. If it abstained when the vertical line neurons were activated, it would further support this conclusion.

To artificially trigger the different neuronal constellations, the researchers used a technique called optogenetics. Here, a virus is used to deliver a gene encoding a light-sensitive protein, called an opsin, into the targeted neurons. Once the neurons express this protein, they respond to light: when the researchers want to activate this specific subset of cells, all they have to do is shine finely tuned light on them, and the cells fire just as if they had been activated naturally.

When the scientists artificially activated the neurons responsible for perceiving the horizontal line, lo and behold, the mice licked from the spigot. When the vertical line neurons were artificially activated, the mice did nothing. The results, then, were in the affirmative — the furry little animals might have actually perceived the different lines and acted accordingly.

Ultimately, the new study shows that by artificially triggering certain ensembles of neurons in the visual cortex, we can replicate a visual experience without any actual visual input. We are, in other words, one step closer to realizing the brain-in-a-vat scenario. Who knows? Maybe one day that vision of your laptop and its keyboard will be artificially induced.