While Gallant decodes what we see, Moran Cerf from the California Institute of Technology is decoding what we think about. He uses tiny electrodes to measure the activity of individual neurons in the hippocampus, a part of the brain involved in creating memories. In this way, he can identify neurons that fire in response to specific concepts – say, Marilyn Monroe or Yoda. Cerf’s work is a lot like Gallant’s – he effectively creates a dictionary that links concepts to patterns of neural activity. “You think about something and because we learned what your brain looks like when you think about that thing, we can make inferences,” he says.
But both techniques share the same limitation: to compile the dictionaries, people need to look at a huge number of videos or concepts. To truly visualise a person’s thoughts, Cerf says, “That person would need to look at all the concepts in the world, one by one. People don’t want to sit there for hours or days so that I can learn about their brain.”
So, visualising what someone is thinking is hard enough. When that person is dreaming, things get even tougher. Dreams have convoluted stories that are hard to break down into sequences of images or concepts. “When you dream, it’s not just image by image,” says Cerf. “Let’s say I scanned your brain while you were dreaming, and I see you thinking of Marilyn Monroe, or love, or Barack Obama. I see pictures. You see you and Marilyn Monroe, whom you’re in love with, going to see Barack Obama giving a speech. The narrative is the key thing we’re going to miss.”
You would also have to repeat this for each new person. The brain is not a set of specified drawers where information is filed in a fixed way. No two brains are organised in quite the same fashion. “Even if I know everything about your brain and where things are, it doesn’t tell me anything about my brain,” says Cerf.
There are some exceptions. A small number of people have regular ‘lucid dreams’, where they are aware that they are dreaming and can partially communicate with the outside world. Martin Dresler and Michael Czisch from the Max Planck Institute of Psychiatry exploited this rare trait. They told two lucid dreamers to dream about clenching and unclenching their hands, while flicking their eyes from side to side. These dream movements translated into real flickers, which told Dresler and Czisch when the dreams had begun. They found that the dream movements activated the volunteers’ motor cortex – the area that controls our movements – in the same way that real-world movements do.
The study was an interesting proof-of-principle, but it is a long way from reading normal dreams. “We don’t know if this would work on non-lucid dreams. I’m sceptical that even in the medium-term future you’d ever have devices for reading dreams,” says Dresler. “The devices you have in wakefulness are very far from reading your mind or thoughts, even in the next couple of decades.”
Even if those devices improve by leaps and bounds, reading a sleeping mind poses great, perhaps insurmountable challenges. The greatest is that you cannot really compare the images and stories you reconstruct with what a person actually dreamt. After all, our memories of our dreams are hazy at the best of times. “You have no ground-truthing,” says Gallant. It is like compiling a dictionary between two languages, one of which you cannot actually read. One day, we might be able to convert the activity of dreaming neurons into sounds and sights. But how would we ever know that we had done it correctly?