A group of researchers in Japan has developed a mind-reading AI, and it is already producing some startling results, reconstructing visualisations of thoughts that bear an uncanny resemblance to the originals.
Over the years, we have seen some wide-ranging uses of machine intelligence. From detecting costumes for Halloween and diagnosing critical diseases to predicting a person's sexuality, AI has been tweaked for a range of analytical missions – good as well as bad.
However, this time, researchers from Japan's Kyoto University have developed a deep neural network for something far more ambitious – the ability to read minds and 'decode' what a person is actually thinking.
Although this sounds like something out of a sci-fi movie, the technology is very real and was detailed in a study published last month. The team is still working on it, but the neural network has already started producing some genuinely surprising results.
According to CNBC, the system scans brain activity to recreate visualisations of thoughts, which bear an uncanny resemblance to what a person is actually seeing or thinking.
"We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person's brain activity," Yukiyasu Kamitani, one of the scientists, told CNBC Make It. "It's known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities."
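The core idea Kamitani describes – recovering an image by searching for pixels whose network features match features decoded from brain activity – can be illustrated with a toy sketch. This is a hypothetical simplification, not the lab's actual pipeline: a random linear map stands in for a deep network's feature extractor, and the "decoded" features are assumed to equal those of the seen image.

```python
import numpy as np

# Toy sketch of feature-matching reconstruction (hypothetical; not the
# Kamitani lab's real method). A fixed random linear map W stands in
# for a deep network's hierarchical feature extractor.
rng = np.random.default_rng(0)

n_pixels, n_features = 64, 32
W = rng.standard_normal((n_features, n_pixels))  # stand-in feature extractor

true_image = rng.standard_normal(n_pixels)       # the image the subject "saw"
decoded_features = W @ true_image                # features decoded from the brain

# Reconstruct by gradient descent: find an image x whose features match
# the decoded ones, i.e. minimise ||W x - decoded_features||^2.
x = np.zeros(n_pixels)
lr = 0.002
for _ in range(2000):
    grad = 2 * W.T @ (W @ x - decoded_features)  # gradient of the squared error
    x -= lr * grad

error = np.linalg.norm(W @ x - decoded_features)
print(f"feature mismatch after optimisation: {error:.6f}")
```

In this toy setting the optimiser drives the feature mismatch close to zero; the real system faces a far harder problem, since features decoded from noisy brain scans only approximate those of the true image.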
To test the models, three subjects were shown natural images of birds and people, letters of the alphabet, and geometric shapes over a period of about 10 months. When a subject's brain activity was scanned while viewing an image, the system detected the object and reverse-engineered an image with multiple layers of colour and a very similar structure.
The reconstruction was not an exact copy of the original image, but it came remarkably close to what the subject was seeing.
However, when the subject was merely thinking about an image they had seen, the system struggled to complete the reconstruction and produced more distorted results. As the report notes, this is because the human brain finds it harder to remember a cheetah or a bird exactly as it was seen. "The brain is less activated" in such scenarios, according to Kamitani.
"These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain," the scientist added.
Although such a system could stoke fears of mind-reading machines and their domination, the team believes the tech would be a boon to humanity. Kamitani believes the neural network, with further development, could be adapted to visualise the hallucinations of psychiatric patients or to allow communication through imagery and thoughts. It could even aid creativity by allowing people to draw things using only their imagination.