
Brain implant may help ALS patients to communicate with thoughts


A speech prosthetic developed by researchers in the US can translate brain signals into words, and could one day enable people with neurological disorders to regain their ability to communicate.

The research, published in Nature Communications, was conducted by a collaborative team of neuroscientists, neurosurgeons and engineers at Duke University.

Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine, said: “There are many patients who suffer from debilitating motor disorders, like ALS or locked-in syndrome, that can impair their ability to speak.

“But the current tools available to allow them to communicate are generally very slow and cumbersome.”

There are already speech decoding tools designed to help such patients speak.

However, these tools can only decode at a maximum of 78 words per minute – less than half the speed of human speech.

This lag is partly due to the fact that only a few brain activity sensors can be fused onto the paper-thin material that sits on the surface of the brain.

To tackle the problem, Jonathan Viventi, Ph.D., and his team in the university's biomedical engineering department crammed 256 microscopic brain sensors onto a postage-stamp-sized piece of flexible, medical-grade plastic.

After creating the implant, Cogan, Viventi and their colleagues teamed up with several Duke University Hospital neurosurgeons, who helped recruit four patients to test the implants.

The device was then temporarily placed in patients who were undergoing brain surgery for another condition, such as treating Parkinson’s disease or having a tumour removed.

The researchers had only a limited time to test the device in the operating room.

Cogan said: “I like to compare it to a NASCAR pit crew.

“We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task required the participants to listen to a series of nonsense words, like “kug” or “vip,” before speaking each one aloud.

The device recorded activity from each participant’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw and larynx.

Biomedical engineering graduate student Suseendrakumar Duraivel then took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
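
The article does not detail which model Duraivel used, but the general recipe it describes is standard supervised classification: windows of multi-channel neural activity are paired with the sound the participant actually produced, and a model is trained to predict the sound from the recording alone. The Python sketch below is purely illustrative, with made-up data shapes and a generic scikit-learn classifier standing in for the study's actual pipeline.

```python
# Hypothetical sketch of a phoneme decoder: neural features in, predicted sound out.
# Data shapes, labels and the choice of classifier are illustrative assumptions,
# not the method used in the Duke study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_trials = 300        # one trial per spoken nonsense syllable (assumed)
n_channels = 256      # matches the sensor count reported in the article
n_timepoints = 40     # samples in the window around sound onset (assumed)

# Stand-in neural recordings and the phoneme spoken on each trial.
X = rng.normal(size=(n_trials, n_channels * n_timepoints))
y = rng.choice(["g", "k", "p", "b", "v"], size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a classifier to map a trial's brain activity to the sound produced.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on held-out trials is the analogue of the overall decoding
# accuracy figure quoted later in the article.
print("decoding accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real data, accuracy would be reported per sound and per position in the syllable, which is where the figures in the next paragraphs come from.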

For some sounds and participants, such as ‘g’ in the word “gak,” the decoder got it right 84 per cent of the time when it was the first sound in a string of three that made up a given nonsense word.

However, accuracy dropped as the decoder parsed out sounds in the middle or at the end of a nonsense word.

It also struggled if two sounds were similar, such as ‘p’ and ‘b’.

Overall, the decoder was accurate 40 per cent of the time, with the algorithm working with only 90 seconds of spoken data from the 15-minute test.

The researchers are excited about making a cordless version of the device with a recent $2.4 million (£1.9 million) grant from the National Institutes of Health.

Cogan said: “We’re now developing the same kind of recording devices, but without any wires.

“You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

Viventi added: “We’re at the point where it’s still much slower than natural speech, but you can see the trajectory where you might be able to get there.”
