Tech enables man with ALS to ‘speak’ in real time

Published On: 12 June 2025

A brain-computer interface has enabled a man with ALS to speak in real time by converting brain activity directly into a synthetic voice.

The investigational system allows the user to communicate with family through a digital vocal tract that simulates speech, enabling him to adjust intonation and even produce simple melodies.

Unlike earlier technologies that translated brain signals into text—akin to texting—this interface generates voice instantly, enabling more natural and fluid conversation.

Developed by researchers at the University of California, Davis, the system uses four microelectrode arrays surgically implanted in the area of the brain responsible for producing speech.

These devices record electrical activity from neurons and send it to computers that decode the signals to recreate spoken words.

Sergey Stavisky is senior author of the study and assistant professor in the UC Davis department of neurological surgery.

Stavisky said: “Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging.

“It’s a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call.

“With instantaneous voice synthesis, neuroprosthesis users will be able to be more included in a conversation. For example, they can interrupt, and people are less likely to interrupt them accidentally.”

The participant is enrolled in the BrainGate2 clinical trial at UC Davis Health.

The interface translates his brain activity into audible speech played through a speaker with a delay of just one-fortieth of a second—comparable to the slight lag people experience when hearing their own voice.

Maitreyee Wairagkar is first author of the study and project scientist in the UC Davis Neuroprosthetics Lab.

Wairagkar said: “The main barrier to synthesising voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak.

“Our algorithms map neural activity to intended sounds at each moment of time.

“This makes it possible to synthesise nuances in speech and give the participant control over the cadence of his BCI-voice.”

The interface enabled the participant to say new words not previously known to the system and to use vocal elements such as intonation to ask questions or place emphasis.

He also made early attempts at modulating pitch by singing simple, short melodies.

Listeners understood nearly 60 per cent of the synthesised words correctly, compared with just 4 per cent without the BCI.

The system relies on advanced artificial intelligence algorithms trained using data collected as the participant attempted to speak sentences displayed on a screen.

The recordings captured firing patterns from hundreds of brain cells, which researchers aligned with the intended speech sounds.

This allowed the algorithm to reconstruct his voice using neural signals alone.
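In rough outline, the pipeline described above decodes a short window of neural activity into acoustic output every fraction of a second. The sketch below illustrates that streaming structure only; the window length is taken from the article's reported one-fortieth-of-a-second latency, while the channel counts, feature sizes, and the linear "decoder" are stand-in assumptions, not the study's actual model.

```python
# Illustrative sketch of a streaming brain-to-voice loop.
# All sizes and the linear map are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

WINDOW_MS = 25      # one-fortieth of a second, matching the reported latency
N_CHANNELS = 256    # assumed number of recorded electrode channels
N_ACOUSTIC = 20     # assumed number of acoustic features per output frame

# Stand-in "decoder": a fixed linear map from neural features to acoustic features.
# The real system uses trained AI models; this is structure, not substance.
W = rng.standard_normal((N_ACOUSTIC, N_CHANNELS)) * 0.01

def decode_frame(neural_window: np.ndarray) -> np.ndarray:
    """Map one 25 ms window of neural activity to one frame of acoustic features."""
    return W @ neural_window

# Simulate one second of streaming decoding: 40 windows of 25 ms each.
frames = [decode_frame(rng.standard_normal(N_CHANNELS)) for _ in range(40)]
audio_features = np.stack(frames)   # shape: (40, N_ACOUSTIC)
print(audio_features.shape)
```

The point of the loop is that each frame is emitted as soon as its window is decoded, rather than waiting for a full sentence, which is what allows conversational, voice-call-like latency.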

David Brandman is co-director of the UC Davis Neuroprosthetics Lab and the neurosurgeon who carried out the implantation.

He said: “Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions.

“The results of this research provide hope for people who want to talk but can’t. We showed how a paralysed man was empowered to speak with a synthesised version of his voice.

“This kind of technology could be transformative for people living with paralysis.”

Brandman is also assistant professor in the department of neurological surgery and site-responsible principal investigator for the BrainGate2 trial.

While the findings are encouraging, the researchers emphasise that brain-to-voice neuroprostheses remain in early development.

The study involved only one participant with ALS – amyotrophic lateral sclerosis – a progressive neurological disease that affects nerve cells controlling voluntary muscles, including those used for speech.

Further studies involving participants with speech loss caused by other conditions, such as stroke, will be needed to validate the system’s broader use.
