I want to show you something simple your mind can do, which illustrates a fascinating emerging theory about how the brain works. First, look at this logo of the World Cup this year.
The idea of the emblem is obvious: This is an illustration of a trophy with an abstract soccer ball on top. The colors—green, yellow, and blue—mirror the host country's flag.
Now consider this tweet from copywriter Holly Brockwell, which got 2,400 retweets: "CANNOT UNSEE: the Brazil 2014 logo has been criticised for 'looking like a facepalm.'"
You know, a facepalm:
With this new cue—to see the logo as a facepalm—the yellow part becomes an arm with its hand pressed into a green head. And, as Brockwell indicated, once you see this second possibility, you can't unsee it.
People report this kind of thing all the time, and they use this same phrase: cannot unsee. Someone points out something and suddenly a secondary interpretation of an image appears. There's something a little scary about this process, even when the images are harmless. We have a flash of insight and a new pattern is revealed hiding within the world we thought we knew. It surprises us. Ah! That's not a vine, that's a snake! That's an LG logo. NO—it's Pac-Man!
But usually the image hasn't changed; only what we think about it has. What's going on here?
I couldn't find anyone who studies the really specific cannot-unsee phenomenon that I'm talking about here. But Villanova psychologist Tom Toppino has been studying phenomena like this for decades. He sent me a famous image from the academic literature that gets at what's happening with the World Cup logo. I'm not going to tell you what it is yet, but there is a figure in this field of spots. (Don't scroll ahead!)
See it yet?
It's a Dalmatian, camouflaged.
"It is hard to discern a Dalmatian standing among many black spots scattered on a white background because the part of the image corresponding to the dog lacks contours that define the edges of the dog, and the dog’s spotted texture resembles that of the background," write Dartmouth cognitive scientists Peter Tse and Howard Hughes. "Many observers find that they first recognize one part of the dog, say the head, which then makes the whole dog’s shape apparent."
Here, I'll outline it (just in case).
And if you ever encounter this image again, you will immediately see the Dalmatian again. What's interesting is that the visual stimulus (the picture) doesn't change, but once your mind knows what kind of organization to impose, it's obvious that the Dalmatian is there.
"When the scene is encountered again, sensory cues will again identify high information areas, but this time the prior knowledge needed to complete the perceptual act is readily available, and the perceptual interpretation is achieved in a way that seems automatic and perhaps inevitable," Toppino said. "One general lesson of this demonstration is that perception is not the result of simply processing stimulus cues. It also importantly involves fitting prior knowledge to the current situation to create a meaningful interpretation."
In short: what you know influences what you see.
One way psychologists and other people who study the brain have been probing these questions is through the use of ambiguous figures. These are images for which there are two totally plausible alternative interpretations. Here's a famous one that may give you nightmares:
What do you see? I see a duck first, then a rabbit. In one test, though, more people saw the rabbit first. It's the ability to flip back and forth that gets to me. Once I saw the rabbit, I couldn't unsee it, even if I could occasionally force my perception to see the duck.
Or try this one, perhaps the most famous ambiguous image of all.
Most people see a young woman, but some see an older woman. Others see both. For the life of me, I can't force my mind to find the older woman in the image.
Back in the 1960s, one scientist (Gerald Fisher) even showed how to develop this kind of figure using gradations of ambiguity.
Almost everyone sees a guy in the top left box and almost everyone sees a woman in the bottom right box, but the illustrations in the middle could go either way.
Other images are called "reversible." These are pictures that toggle between states. You see one thing, look away, then look back and see another. A lot of these fall into the optical illusion category, and psychologists like to manipulate the conditions under which one or the other will appear.
They have developed a ton of these tools to probe different areas of the mechanics of perception.
"I think one can describe the can't-unsee phenomenon as follows: Once you interpret visual stimulus in a certain way, you'll continue to interpret it in the same way now and the next time you encounter the stimulus," Toppino said. "Ambiguous figures certainly involve some of the same processes."
Before we get into the mechanics of those processes, let's step back for a minute and talk about a current hypothesis about the way visual processing works. We tend to think of the eyes as sensors, like a CCD in a camera. Light falls on retinal cells and they convey that information "up" to the brain, which shows us a real-time image of our environment.
Of course, the brain is much more complex than this. When scientists look at the visual cortex, they find distinct layers that function in a rough hierarchy. Each layer handles a certain level of complexity. So, the most basic might simply process lines at a certain angle. Here, we see a famous illustration by Nobel Prize winners David Hubel and Torsten Wiesel of neural recordings in V1, the region that receives input most directly from the retina, through two areas of the thalamus known collectively as the lateral geniculate nucleus, or LGN.
It depicts recordings from neurons that respond most strongly to diagonal lines running through their receptive field, which is represented as the dashed box. Diagonal line: heavy firing. Horizontal line: not so much.
The neurons in the next layer might respond most intensely to a simple pattern, or density of lines. Through the cortex, the complexity increases, but the mechanisms remain the same: neurons are tuned to certain phenomena, whether that's a line of a certain orientation or a face.1
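The tuning Hubel and Wiesel describe can be caricatured in a few lines of code. This is a toy sketch, not a model of real V1 circuitry: a "simple cell" is just a template for its preferred pattern, and its response is the overlap between that template and whatever falls on its receptive field.

```python
# Toy "simple cell" tuned to a diagonal line. Excitatory (+1) weights lie
# along the preferred orientation; inhibitory (-1) weights lie off it.
# This is a hypothetical illustration of orientation tuning, nothing more.

def response(patch, template):
    """Overlap (dot product) between an image patch and the cell's template."""
    return sum(p * t for row_p, row_t in zip(patch, template)
                     for p, t in zip(row_p, row_t))

diagonal_cell = [[ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]]

diagonal_line   = [[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]]
horizontal_line = [[0, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0]]

print(response(diagonal_line, diagonal_cell))    # heavy firing: 3
print(response(horizontal_line, diagonal_cell))  # suppressed: -1
```

Stack layers of such detectors, each one reading the outputs of the layer below, and you get the rough hierarchy the recordings reveal.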
Much of this has been known for decades.2 But here's where the science has gotten really interesting in the last five or ten years. When neuroscientists look at the connections between the cells, they don't just see information passing up the complexity chain. There is information running down from the neocortex's higher levels to the lower ones.
"For every axon coming from the retina into our thalamus before entering our 'consciousness' in the primary visual cortex, the primary visual cortex sends at least twice as many axons back onto the thalamus to modulate the raw signal," explained UC San Diego neuroscientist Bradley Voytek.3
Why is that significant? "Our cortex is already changing the raw visual information before that information gets into our consciousness," Voytek concluded.
You're not only seeing what is actually before you; you're seeing what your brain is telling you is there.
Specifically, the cortex is sending a cascade of predictions about what should be seen at all the different layers of complexity. So what travels back up from the eyes is not raw visuals of the environment, but how the world deviates from what the brain is expecting.
According to University of Edinburgh philosopher Andy Clark's masterful 2013 summary of the state of cognitive science, this emerging idea about the brain is called the "bidirectional hierarchical network model." It holds that every level of the brain is engaged in making predictions, so the expectation of seeing a house feeds down through the cortex to the eyes, which are then more likely to perceive a sloping roof instead of something else. But if something is amiss with the prediction, that information gets transmitted back up and the brain tries to find a better organizational paradigm for the visual input. Knowledge feeds perception, and perception feeds back into knowledge. There are loops everywhere, strengthening and weakening according to how well they seem to reflect exterior reality.
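The loop Clark describes can be sketched in its most stripped-down form. This is an illustrative toy, not anyone's published model: a higher level holds a prediction, only the mismatch (the "prediction error") travels back up, and the prediction is nudged until the error shrinks away.

```python
# Minimal predictive-coding loop, reduced to a single number.
# All names and parameters here are made up for illustration.

def perceive(sensory_input, prediction, learning_rate=0.5, steps=20):
    """Repeatedly adjust a top-down prediction to fit bottom-up input."""
    for _ in range(steps):
        error = sensory_input - prediction   # what travels "up" the hierarchy
        prediction += learning_rate * error  # knowledge adjusts to fit the world
    return prediction

# A prior expectation that starts far from the actual input
# converges on it as errors feed back, loop after loop.
print(round(perceive(sensory_input=1.0, prediction=0.0), 3))  # → 1.0
```

When prediction and input already agree, the error is zero and nothing travels up, which is the sense in which the brain mostly traffics in surprises, not raw pictures.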
Now perhaps you see why the facepalm trick works: if you know to look for a hand, there it is.
University of Sheffield cognitive scientist Tom Stafford likes to show people this picture in talks. He "warns the audience that [he's] about to 'rewire their brains.'"
Then, he says, "It's a frog."
At that moment, the frog jumps out at most people, and they'll find it easy to see the frog from then onward. They cannot unsee it. And though he's joking about the brain wiring stuff—playing off the media trope/tripe that this or that thing rewires the brain—he's not wrong. Somewhere in your neocortex, the predictive model that knows what a frog looks like is influencing the cascade of neuronal activity, turning circles into eyes and an arc into a mouth. If you still can't see him, here's the original grayscale:
So, when Holly Brockwell tells us that she can't unsee the facepalm in the World Cup logo, she is, quite literally, rewiring your brain.
As is Richard Shearwood here:
And this one, oh god, this one is a great example of someone directing your conscious attention to a previously unnoticed alternative image interpretation.
These silly images tell us something significant about the way we are. Shakespeare was onto something with his email-signature-famous line, delivered by Hamlet, "for there is nothing either good or bad, but thinking makes it so."
Cognitive scientists have other ways of putting it. Here are a couple: "Sensory stimulation might be the minor task of the cortex, whereas its major task is to ... predict upcoming stimulation as precisely as possible," write Lars Muckli and colleagues at the University of Glasgow.
Or Karsten Rauss at the University of Tübingen in Germany and collaborators: "Neural signals are related less to a stimulus per se than to its congruence with internal goals and predictions, calculated on the basis of previous input to the system."
To paraphrase all of them: It is not that the real world doesn't exist, but rather that we experience it as a hybrid reality: our top-down categories and imagination of the world and our bottom-up sensory experience of the world blend seamlessly into the experience of walking outside into the sunshine or seeing a bird on a wire or eating an oyster or seeing Jesus in a tortilla.
"All this makes the line between perception and cognition fuzzy, perhaps even vanishing," is the conclusion Clark the philosopher draws.
So, the next time someone tweets an image they can't unsee, know that your brain will never be the same.
2. Thanks to neuroscientist and machine learning expert Beau Cronin for his guidance through the literature.
3. Voytek published a fascinating paper on the "hypothetical neuroscience" of the residents of China Miéville's novel The City and The City, in which two towns co-exist ignoring each other's existence.