In 2011, Gallant’s team used a magnetic resonance imaging (MRI) scanner to show how computers can be trained to read the minds of people watching moving images. They built up a database of activity in a key visual centre of the brain as three fellow researchers watched a compilation of Hollywood film trailers. The subjects then watched a new set of clips, and using the brain activity this generated, together with the database built in the first phase of the experiment, the computer selected from 5,000 hours of randomly chosen YouTube footage the segments that best matched that activity. The results, although blurry, were recognisable as copies of the originals, and demonstrated for the first time that moving images could be decoded from the brain.
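To give a flavour of how such a decoder can work, here is a minimal Python sketch of the matching step alone: it ranks a library of candidate clips by how closely their associated brain responses correlate with a newly observed response. The published pipeline was far more sophisticated, fitting an encoding model to predict each voxel’s response; every name and number below is a hypothetical stand-in, and the data are entirely simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: brain-response vectors (e.g. voxel activity)
# associated with a large library of candidate clips, plus one response
# recorded while the subject watched an unknown clip.
n_clips, n_voxels = 10_000, 500
library_responses = rng.standard_normal((n_clips, n_voxels))
observed_response = library_responses[42] + 0.5 * rng.standard_normal(n_voxels)

def rank_candidates(observed, library):
    """Rank library clips by the correlation between their associated
    brain response and the observed response -- the matching idea at the
    heart of this style of decoder."""
    lib = library - library.mean(axis=1, keepdims=True)
    lib /= np.linalg.norm(lib, axis=1, keepdims=True)
    obs = observed - observed.mean()
    obs /= np.linalg.norm(obs)
    similarity = lib @ obs  # one similarity score per candidate clip
    return np.argsort(similarity)[::-1]

# The study blended the best-matching clips into a blurry reconstruction;
# this sketch simply reports the top candidates.
top = rank_candidates(observed_response, library_responses)[:10]
print("Best-matching clip indices:", top)
```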
A related advance came in April this year when Japanese scientists led by Yukiyasu Kamitani, of the ATR Computational Neuroscience Laboratories in Kyoto, revealed they had made significant steps towards automated dream decoding. Three people had their brains scanned in an MRI machine while they slept, and were awoken whenever EEG signals indicated they had reached an early phase of sleep associated with dreaming. The researchers then asked them to describe their dreams, repeating the process until more than 200 reports had been collected for each person.
Kamitani’s group then chose 20 categories of objects and scenes based on the words that occurred most frequently in the descriptions. They selected photos representing each category, and scanned their volunteers’ brains while they looked at them. Comparing the scans taken while participants were awake with those taken while they were dreaming allowed dream content to be predicted accurately 60-70% of the time, depending on the individual, the brain areas analysed and the objects and scenes involved. It may not yet be a fully formed “dream decoder”, but it does show that directly decoding mental images is possible.
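The underlying idea can be sketched as a standard pattern-classification exercise: train a classifier on scans recorded while volunteers view labelled photos, then apply it to scans taken just before waking. The Python sketch below uses a linear support-vector classifier in that spirit; the data shapes, noise levels and printed accuracy are all invented for illustration and do not reproduce the study’s actual method or results.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Hypothetical data: voxel patterns for each of 20 content categories.
n_categories, n_voxels = 20, 300
category_means = rng.standard_normal((n_categories, n_voxels))

def simulate_scans(n_per_cat, noise):
    """Generate noisy scans around each category's mean pattern."""
    X = np.vstack([
        category_means[c] + noise * rng.standard_normal((n_per_cat, n_voxels))
        for c in range(n_categories)
    ])
    y = np.repeat(np.arange(n_categories), n_per_cat)
    return X, y

# Train on scans taken while "awake and viewing photos", then test on
# simulated pre-awakening scans (assumed noisier than waking vision).
X_awake, y_awake = simulate_scans(30, noise=1.0)
X_dream, y_dream = simulate_scans(10, noise=1.5)

clf = LinearSVC(max_iter=10_000).fit(X_awake, y_awake)
accuracy = (clf.predict(X_dream) == y_dream).mean()
print(f"Decoded dream-content categories with {accuracy:.0%} accuracy")
```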
MRI scanners may be better at reading the brain than cheap EEG headsets, but that does not make them a practical or affordable solution for 3D printing. “As a giant three million dollar magnet, it is not something you would just wear around,” says Gallant.
Even if these issues can be overcome, there are other obstacles. Seeing an object and imagining one may not produce the same brain signals. On top of this, individuals vary widely in their ability to dream up designs from scratch, and in the level of detail they can imagine. “I have been doing 3D modelling since it began back in the 80s,” says Salt. “And the process is that you build something and then you move it about. You do not sit down and think, I have something absolutely finite in my head and that is what I am going to build.”
These challenges suggest that 3D printing guided by users’ fully formed mental images is, if not entirely far-fetched, a long way from becoming reality. For now, combining sensors that can pick up human emotions with design software that can interpret and respond to them looks like the closest we will get to creating 3D objects from thought.
So Thinker Thing’s twig-like orange monster arm, as unsophisticated as it may appear at first glance, may one day be celebrated as marking the start of a new and exciting way of moulding the things around us. “It is really something magical to be there, sat without moving a limb, and watching the designs evolve into something that you were thinking about,” says Laskowsky.