Scientists said Monday that they have found a way to use brain scans and artificial intelligence modelling to “transcribe” what people are thinking, in a step toward reading the mind.
The system recreates the essence of what a person hears or imagines, rather than trying to repeat each word, a team reports in the journal Nature Neuroscience.
“It’s getting at the ideas behind the words, the semantics, the meaning,” says Alexander Huth, study author and assistant professor of neuroscience and computer science at the University of Texas at Austin.
However, this technology cannot actually read minds: it works only when a participant is actively cooperating with the scientists.
Still, systems that decode language may someday help people who are unable to speak because of brain injury or disease. They are also helping scientists understand how the brain processes words and ideas.
Previous attempts to decode language relied on sensors placed directly on the surface of the brain, which detect signals from clearly defined areas.
But the Texas team’s approach is “an attempt to decode more independent thoughts,” says Marcel Just, a professor of psychology at Carnegie Mellon University who was not involved in the new research.
That could mean it has applications beyond communications, he says.
“One of the greatest scientific medical challenges is understanding mental illness, which is ultimately a dysfunction of the brain,” says Just. “I think this general approach is going to solve that puzzle someday.”
Podcasts in an MRI
The new study comes as part of an effort to better understand how the brain processes language.
The researchers had three people spend up to 16 hours in a functional MRI scanner, which detects signs of activity throughout the brain.
Participants wore headphones that streamed audio from podcasts. “For the most part, they just lay there and listened to stories from The Moth Radio Hour,” Huth says.
Those streams of words produced activity all over the brain, not just in areas associated with speech and language.
“It turns out that a lot of the brain is doing something,” Huth says. “So areas that we use for navigation, areas that we use for doing mental math, areas that we use for processing what things feel like to touch.”
After the participants had listened to stories for hours in the scanner, the MRI data were fed to a computer, which learned to match specific patterns of brain activity with certain streams of words.
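The mechanics of that matching step can be sketched in a few lines. The example below is illustrative only, not the study’s actual code: it assumes word-level feature vectors (embeddings) for the stories, uses random arrays as stand-ins for real data, and fits a ridge regression, a common choice in fMRI encoding work, to predict each voxel’s activity from the language features.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative shapes only: 3,000 scan time points, 512-dimensional
# word-embedding features, 2,000 voxels of brain activity.
rng = np.random.default_rng(0)
story_features = rng.standard_normal((3000, 512))   # stand-in for real embeddings
brain_activity = rng.standard_normal((3000, 2000))  # stand-in for real fMRI data

# Fit one regularized linear map from language features to voxel responses.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(story_features, brain_activity)

# Given features for new words, the model predicts the brain activity
# those words should evoke -- the building block a decoder can rely on.
predicted = encoding_model.predict(story_features[:10])
print(predicted.shape)  # (10, 2000)
```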
Next, the team played new stories for the participants while they were in the scanner. The computer then tried to reconstruct those stories from each participant’s brain activity.
The system got a lot of help generating meaningful sentences from artificial intelligence: an early version of the language model behind the popular natural language program ChatGPT.
What came out of the system was a condensed version of what a participant heard.
So, if a participant heard the phrase, “I don’t even have my driver’s license yet,” the decoded version might be, “He hadn’t even learned to drive yet,” Huth says. In many cases, he says, the decoded version contained errors.
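Conceptually, the decoder runs the trained model in reverse, by search rather than word-for-word translation: a language model proposes candidate phrasings, the encoding model predicts the brain activity each candidate should evoke, and the candidate whose prediction best matches the actual scan wins. The toy function below illustrates that scoring step; `embed` is a hypothetical helper that turns a phrase into a feature vector, and `encoding_model` is the fitted model from the previous sketch.

```python
import numpy as np

def pick_best_candidate(observed_activity, candidates, embed, encoding_model):
    """Return the candidate phrase whose predicted brain activity
    correlates best with the observed scan (toy illustration only)."""
    best_phrase, best_score = None, -np.inf
    for phrase in candidates:
        features = embed(phrase).reshape(1, -1)          # phrase -> feature vector
        predicted = encoding_model.predict(features)[0]  # features -> voxel pattern
        # Correlation between predicted and observed voxel patterns.
        score = np.corrcoef(predicted, observed_activity)[0, 1]
        if score > best_score:
            best_phrase, best_score = phrase, score
    return best_phrase
```

Because the search keeps whichever phrasing scores best, a paraphrase that carries the same meaning can score nearly as well as the original words, which is why “I don’t even have my driver’s license yet” can plausibly come back as “He hadn’t even learned to drive yet.”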
In another experiment, the system was able to decode words that a person merely imagined saying.
And in a third experiment, participants watched videos that told a story without using words.
“We didn’t tell the subjects to try to describe what’s happening,” Huth says. “And yet what we got was this kind of language description of what’s going on in the video.”
A non-invasive window on language
The MRI-based system is slower and less accurate than an experimental communication system currently being developed for paralyzed people by a team led by Dr. Edward Chang at the University of California, San Francisco.
“People get a sheet of electrical sensors implanted directly on the surface of the brain,” says David Moses, a researcher in Chang’s lab. “That records brain activity really close to the source.”
The sensors detect activity in areas of the brain that normally respond to spoken commands. At least one person has been able to use the system to accurately produce 15 words per minute using only their thoughts.
But with an MRI-based system, “nobody has to do surgery,” Moses says.
Neither approach can be used to read a person’s mind without their cooperation. In the Texas study, people were able to defeat the system by silently telling themselves a different story.
But future versions may raise ethical questions.
“It’s very exciting, but it’s also a little scary,” Huth says. “What if you can read out the word that somebody is just thinking in their head? That’s potentially a harmful thing.”
Moses agrees.
“This is all about the user having a new way of communicating, a new tool that is totally in their control,” he says. “That is the goal and we have to make sure that stays the goal.”