While artificial intelligence has been a central focus for tech companies in the past few months, it has also become a hot topic in science.
Scientists are looking into how they can utilize AI in their specialties.
For example, a peer-reviewed study published Monday in the journal Nature Neuroscience showed how AI can be applied to decoding brain activity.
According to the study, scientists have developed a noninvasive AI system that can translate individuals’ brain activity into a stream of text.
AI & neuroscience
Artificial intelligence can benefit neuroscience by enabling more efficient and accurate analysis of large-scale datasets.
It can also help researchers build more accurate models of neural systems and processes.
Additionally, AI can aid in the development of new diagnostic tools and treatments for neurological disorders.
The system
The system, dubbed a semantic decoder, could benefit patients who have lost the ability to communicate physically because of a stroke, paralysis, or a degenerative disease.
Researchers from the University of Texas at Austin developed the system using a transformer model similar to those behind OpenAI’s ChatGPT and Google’s Bard.
In the study, participants trained the decoder by listening to hours of podcasts inside an fMRI scanner, a large piece of machinery that measures a person’s brain activity.
The semantic decoder itself doesn’t require surgical implants.
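To give a sense of the general idea, here is a minimal sketch of how a decoder of this kind can be framed: a model learns a mapping between recorded brain responses and text representations, then picks the candidate sentence whose representation best matches a new response. The data, dimensions, and regression approach below are illustrative assumptions, not the study’s actual code or method.

```python
# Minimal, illustrative sketch (not the study's code): learn a linear mapping
# from simulated fMRI responses to sentence-embedding space, then "decode" a
# new response by picking the closest candidate sentence.
# All data is synthetic and every dimension is an arbitrary assumption.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_embed = 200, 1000, 64  # assumed sizes, for illustration only

# Pretend each training sentence has an embedding (e.g., from a language model)
# and an fMRI response recorded while the participant heard it.
true_map = rng.normal(size=(n_voxels, n_embed))
sentence_embeddings = rng.normal(size=(n_train, n_embed))
fmri_responses = sentence_embeddings @ true_map.T + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit a regularized linear "decoder" from brain responses to embedding space.
decoder = Ridge(alpha=10.0)
decoder.fit(fmri_responses, sentence_embeddings)

# Decoding: predict an embedding for a new brain response, then choose the
# candidate sentence whose embedding is most similar (cosine similarity).
new_response = fmri_responses[:1]           # stand-in for a held-out scan
predicted = decoder.predict(new_response)   # shape (1, n_embed)

candidates = sentence_embeddings[:10]       # stand-in candidate sentences
similarity = (candidates @ predicted.T).ravel() / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(predicted)
)
print("best-matching candidate:", int(np.argmax(similarity)))
```

In practice, systems like the one described decode continuous language rather than isolated sentences, so this is only a schematic view.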
Benefits
AI can help turn thoughts into text by using machine learning to decode the patterns of brain activity associated with language processing.
By analyzing those patterns, algorithms can identify the words or phrases a person is thinking about and then generate corresponding text.
This technology could revolutionize communication for individuals unable to speak or type, such as those with severe paralysis or communication disorders.
However, further research is needed to improve these systems’ accuracy and reliability and to address the ethical and privacy concerns raised by accessing and interpreting individuals’ thoughts.
Text
Once the AI system is trained, it generates a stream of text while a participant listens to, or imagines telling, a new story.
The resulting text is not a word-for-word transcript; instead, the researchers designed it to capture the gist of what was said or thought.
A news release said the trained system produces text that closely matches the intended meaning of the participant’s original words around half the time.
For example, when a participant heard the words “I don’t have my driver’s license yet,” the decoder translated the thought as: “She has not even started to learn to drive yet.”
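Because the output is judged on meaning rather than exact wording, evaluations of this kind of decoder often rely on semantic similarity rather than word-for-word overlap. The snippet below is an illustrative way to score how close two sentences are in meaning; it uses the sentence-transformers package and a general-purpose embedding model, neither of which is specified by the study.

```python
# Illustrative only: score whether a decoded sentence captures the *gist* of
# the original sentence, rather than matching it word for word.
# Assumes the sentence-transformers package is installed; the model name is a
# common general-purpose choice, not one named by the study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

original = "I don't have my driver's license yet"
decoded = "She has not even started to learn to drive yet"

# Embed both sentences and compare their meanings with cosine similarity.
embeddings = model.encode([original, decoded])
score = util.cos_sim(embeddings[0], embeddings[1]).item()

# A high score suggests the decoded text conveys a similar idea even though
# the surface wording is quite different.
print(f"semantic similarity: {score:.2f}")
```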
The absence of implants
Alexander Huth, one of the study’s lead researchers, said:
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.”
“We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
Unlike other decoding systems in development, the semantic decoder doesn’t require surgical implants, making it noninvasive.
Participants also aren’t limited to words from a prescribed list.
Potential misuse
The researchers also addressed inquiries about the potential misuse of the technology.
The study notes that decoding worked only with cooperative participants who had willingly taken part in training the decoder.
The results from people who hadn’t trained with the decoder were unintelligible.
Likewise, when participants who had trained the decoder actively resisted, the results were unusable.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said researcher Jerry Tang.
“We want to make sure people only use these types of technologies when they want to and that it helps them.”
The system is currently limited to the laboratory because it relies on time spent in an fMRI machine.
However, researchers believe the work could be transferred to other, more portable brain-imaging systems like functional near-infrared spectroscopy (fNIRS).
“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” said Huth.
“So, our exact kind of approach should translate to fNIRS.”