
Think about the words that are swirling in your head: that tasteless joke you wisely kept to yourself at dinner; the silent impression you had of your best friend’s new partner. Now imagine that someone could eavesdrop.
On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, researchers describe an artificial intelligence that can translate the private thoughts of human subjects by analyzing fMRI scans, which measure blood flow to different regions of the brain.
Researchers have already developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking of writing. But the new language decoder is one of the first that does not rely on implants. In the study, it was able to turn a person's imagined speech into actual speech and, when subjects watched silent films, to produce relatively accurate descriptions of what was happening on screen.
“It wasn’t just a verbal stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the study. “We’re getting at meaning, ideas about what’s going on. The fact that it’s possible is very exciting.”
The study centered on three participants who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded blood oxygen levels in parts of their brains. The researchers then used large language models to match patterns of brain activity to the words and phrases the participants heard.
Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words relate to one another. A few years ago, Dr. Huth noticed that particular parts of these maps (so-called contextual embeddings, which capture the semantic features, or meaning, of phrases) could be used to predict how the brain lights up in response to language.
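To make that encoding idea concrete, here is a minimal sketch in Python. Everything in it is a stand-in: random arrays play the role of the contextual embeddings and the voxel responses, and a ridge regression plays the role of the model that predicts brain activity from the meaning of the language being heard. It illustrates the general technique, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical shapes, for illustration only: one row per fMRI time point.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((2000, 768))   # contextual embeddings (time points, dims)
bold = rng.standard_normal((2000, 5000))        # voxel responses (time points, voxels)

# A regularized linear "encoding model": predict each voxel's response
# from the semantic embedding of the language being heard.
encoder = Ridge(alpha=1.0)
encoder.fit(embeddings[:1500], bold[:1500])

# A well-fit encoder predicts held-out brain activity from new text,
# which is what makes the embeddings useful for decoding later on.
predicted = encoder.predict(embeddings[1500:])
print(predicted.shape)  # (500, 5000)
```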
In a fundamental sense, says neuroscientist Shinji Nishimoto of Osaka University, who was not involved in the study, “brain activity is an encrypted signal, and language models provide the way to decipher it.”
In their study, Dr. Huth and his colleagues effectively reversed the process, using another AI to translate participants’ fMRI images into words and phrases. The researchers tested the decoder by having participants listen to new recordings and then seeing how closely the translations matched the actual transcripts.
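The shape of that reversal can be sketched the same way. In the hypothetical code below, a language model proposes candidate phrases, and each candidate is scored by how closely the brain activity the encoding model predicts for it matches the activity the scanner actually recorded; the best candidates survive to the next step, in the style of a beam search. The function names, array shapes, and scoring rule here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a language model's contextual embedding of `text`."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(768)

def predict_bold(embedding: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Encoding model: map a semantic embedding to predicted voxel activity."""
    return embedding @ weights

def decode_step(candidates, observed_bold, weights, beam_width=3):
    """Keep the candidate phrases whose predicted brain activity best
    matches what the scanner actually recorded."""
    def score(text):
        predicted = predict_bold(embed(text), weights)
        return -np.sum((predicted - observed_bold) ** 2)  # higher = closer match
    return sorted(candidates, key=score, reverse=True)[:beam_width]

# Toy usage with made-up encoding weights and a made-up scan.
weights = np.random.default_rng(1).standard_normal((768, 5000))
observed = np.random.default_rng(2).standard_normal(5000)
proposals = ["I looked out the window", "I ate breakfast", "the dog barked"]
print(decode_step(proposals, observed, weights, beam_width=2))
```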
The decoded text got nearly every word wrong, yet it regularly preserved the meaning of the passage. Essentially, the decoder was paraphrasing, as the example below and the brief similarity sketch after it illustrate.
Original transcript: “I got up off the air mattress and pressed my face against the glass of the bedroom window, expecting eyes to stare at me, only to find it was pitch black.”
Decoded from brain activity: “I just kept going to the window and opened the glass and I stood on tiptoe and looked out and I didn’t see anything, looked up and I didn’t see anything.”
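One way to quantify that kind of meaning-level overlap, purely as an illustration and not the metric used in the study, is to compare sentence embeddings of the original and decoded text. The sketch below assumes the open-source sentence-transformers package and one of its standard models; a score near 1.0 indicates similar meaning even when few exact words are shared.

```python
from sentence_transformers import SentenceTransformer, util

# Any general-purpose sentence-embedding model will do; this one is a
# common lightweight default (an assumption, not the study's model).
model = SentenceTransformer("all-MiniLM-L6-v2")

original = ("I got up off the air mattress and pressed my face against "
            "the glass of the bedroom window")
decoded = ("I just kept going to the window and opened the glass and "
           "I stood on tiptoe and looked out")

# Cosine similarity of the two sentence embeddings measures meaning
# overlap rather than word overlap.
vecs = model.encode([original, decoded], convert_to_tensor=True)
print(float(util.cos_sim(vecs[0], vecs[1])))
```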
Participants were also asked to silently imagine telling a story while under the fMRI scan; afterward, they retold the story aloud for reference. Here, too, the decoding model captured the gist of the unspoken version.
Participant’s version: “Looking for news from my wife that she has changed her mind and is coming back.”
Decoded version: “Seeing her for some reason, I thought she was going to come to me and say she missed me.”
Finally, subjects watched a brief, silent animated film, again while undergoing an fMRI scan. By analyzing their brain activity, the language model could decode a rough synopsis of what they were watching, perhaps their internal description of it.
The results suggest that the AI decoder was capturing not just words but also meaning. “Language perception is an externally driven process, whereas imagination is an active internal process,” Dr. Nishimoto said. “The authors showed that the brain uses common representations in these processes.”
Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the study, said it was “the high-level question.”
“Can we decode meaning from the brain?” she continued. “In some ways, they showed that, yes, we can.”
Dr. Huth and his colleagues acknowledge that this method of language decoding has limitations. For one thing, fMRI scanners are bulky and expensive. Moreover, training a model is a long and tedious process that must be done individually for each person to be effective. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has a unique way of representing meaning.
Participants were also able to shield their internal monologues, throwing off the decoder by thinking about other things. AI may be able to read our minds, but for now it will have to read them one at a time, and with our permission.