
Graphic representation of thoughts from the brain expressed on a computer – Photo: IFL Science
According to IFL Science, microelectrode arrays were implanted in speech-production regions of the brain of a patient diagnosed with amyotrophic lateral sclerosis (ALS). This brain-computer interface (BCI) lets him speak his mind to other people.
The BCI system was trained on a dataset of 10,850 sentences.
The system decoded the patient's attempted speech from brain activity at a rate of 62 words per minute, 3.4 times faster than the previous record.
The authors say that the patient's computer-generated speech had a word error rate (the share of words mistranslated from brain activity to text) of 9.1% on a 50-word vocabulary and 23.8% on a large vocabulary.
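For readers unfamiliar with the metric: word error rate counts the word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. A minimal Python sketch of the computation (the function and test sentences are illustrative, not taken from the studies):

    def word_error_rate(reference, hypothesis):
        # WER = (substitutions + insertions + deletions) / words in reference,
        # computed here as a word-level Levenshtein edit distance.
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # One wrong word in a ten-word sentence is a 10% word error rate.
    print(word_error_rate("please bring me a glass of water right now thanks",
                          "please bring me a glass of milk right now thanks"))  # 0.1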
"This BCI system has been trained to know which words should come before other words and which phonemes make up which words," study author Dr Frank Willett told the BBC.
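That description corresponds to a standard idea: a language model that scores word order, layered on top of the neural decoder. A toy bigram model in Python illustrates the word-order part (the word pairs and probabilities below are invented for illustration; the study's actual model is far larger):

    # Toy bigram language model: score a candidate sentence by how likely
    # each word is to follow the one before it. Probabilities are invented.
    bigram_prob = {
        ("i", "want"): 0.30, ("i", "water"): 0.01,
        ("want", "some"): 0.25, ("some", "water"): 0.20,
    }

    def sentence_score(words, unseen=1e-4):
        # Multiply bigram probabilities; unseen pairs get a small default.
        score = 1.0
        for prev, cur in zip(words, words[1:]):
            score *= bigram_prob.get((prev, cur), unseen)
        return score

    # The decoder keeps the candidate the language model finds more plausible.
    print(sentence_score("i want some water".split()))  # likely word order
    print(sentence_score("i water some want".split()))  # unlikely word order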
Patients who tested the system told the BBC that these advances could help them "continue working and maintain relationships with friends and family".
However, they also note that a 24% word error rate is probably still too high for day-to-day use, especially compared with the 4–5% word error rate of modern speech-to-text systems.
The second study involved a patient who had suffered a stroke several years earlier.
The authors "trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences".
The researchers say that the decoder “achieved high performance” after less than two weeks of training.
According to the researchers, the system can quickly and accurately decode attempted speech over a large vocabulary, at an average rate of 78 words per minute with an average word error rate of 25 percent.
The authors personalized the synthetic voice to sound like the patient's own, basing it on a short clip of the patient's voice taken from a video recorded before the patient became ill.
The researchers also created a digital avatar to reproduce the patient's facial expressions, animated with software from Speech Graphics, a company whose technology converts speech signals into facial-motion animation for games and films.
The patient said: "The simple fact of hearing a voice like your own is emotional. The ability to speak aloud is essential. For the first seven years after the stroke, I only used a writing board. Now the machine makes it possible for me to express myself in words."