
Mind-reading machine that converts people’s thoughts into words has been developed by scientists

A mind-reading machine that converts people’s thoughts into words has been developed by scientists.

It offers hope that patients paralysed after a stroke or accident could communicate with friends and loved ones despite being unable to speak.

The pioneering system combines the power of speech synthesisers and artificial intelligence (AI) to turn brain activity into intelligible sentences.

It is based on the same technology used by Amazon Echo and Apple Siri.

Dr Nima Mesgarani, a professor of electrical engineering at Columbia University in New York, hopes it will be life-changing for thousands of stricken individuals.

He said: “Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating.

“With today’s study, we have a potential way to restore that power.

“We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”

Ultimately, he hoped their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.

Prof Mesgarani added: “In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesised, verbal speech.

“This would be a game changer.

“It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

People with locked-in syndrome, for instance, cannot voluntarily move any muscles apart from the eyes – making conversation impossible.

British cosmologist Professor Stephen Hawking used a speech generating device activated by a muscle in his cheek after he was struck down by severe motor neurone disease.

But Prof Mesgarani’s technique, described in Scientific Reports, is completely different – reconstructing the words a person hears with unprecedented clarity.

It lays the groundwork for helping those who cannot speak regain their ability to converse with the outside world.

New ways could also be devised for computers to communicate directly with the brain.

When people talk – or even imagine doing so – telltale patterns of activity appear in their brain.

Distinct, recognisable signals also emerge when we listen to, or imagine, someone speaking.

Experts who record and decode them see a future in which thoughts need not remain hidden inside the brain – but instead could be translated into verbal speech at will.

Early attempts by Prof Mesgarani and others, based on simple computer models of sound frequency scans called spectrograms, failed.

So his team turned to a computer algorithm known as a ‘vocoder’ that can synthesise speech after being trained on recordings of people talking.
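For the technically curious, the contrast can be sketched in a few lines of Python. This is not the study’s code: it reconstructs a waveform from a magnitude spectrogram with the generic Griffin-Lim algorithm from the librosa library, the kind of frequency-scan-only approach the early attempts relied on, and the file name and settings are placeholders.

```python
import numpy as np
import librosa

# Load a short speech clip (the path is a placeholder).
y, sr = librosa.load("speech_sample.wav", sr=16000)

# Magnitude spectrogram: the "sound frequency scan" the early models worked with.
S = np.abs(librosa.stft(y, n_fft=512, hop_length=128))

# Griffin-Lim inversion recovers a waveform from magnitude alone by
# iteratively guessing the missing phase; the muffled result is one
# reason spectrogram-only pipelines proved hard to understand.
y_reconstructed = librosa.griffinlim(S, n_iter=32, hop_length=128)
```

A trained vocoder, by contrast, learns from recordings of real talkers how to turn compact acoustic parameters into natural-sounding speech, which made it a better target for the brain-decoding step.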

Prof Mesgarani said: “This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions.”

He taught it to interpret brain activity by teaming up with US neurosurgeon Dr Ashesh Mehta, whose epilepsy patients had allowed him to implant electrodes in their brains to find the source of their seizures.

Prof Mesgarani explained: “Working with Dr Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity. These neural patterns trained the vocoder.”

The researchers then asked the patients to listen to speakers reciting digits from 0 to 9 while they recorded brain signals that could be run through the vocoder.
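In outline, that train-then-decode loop looks like the sketch below. Everything here is an assumption for illustration: the array shapes, the random stand-in data and the simple ridge regression (the study used far more powerful models), with scikit-learn standing in for the real decoding machinery.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-in for the training step: learn a mapping from recorded
# neural activity to the acoustic parameters a vocoder needs.
n_frames, n_electrodes, n_acoustic = 5000, 128, 32

rng = np.random.default_rng(0)
neural = rng.standard_normal((n_frames, n_electrodes))    # brain activity while hearing speech
acoustic = rng.standard_normal((n_frames, n_acoustic))    # vocoder parameters of the heard speech

# Fit a frame-by-frame decoder on the paired recordings.
decoder = Ridge(alpha=1.0).fit(neural, acoustic)

# At test time, signals recorded while patients heard digits 0 to 9
# would be decoded into acoustic parameters and resynthesised.
predicted_acoustic = decoder.predict(neural[:10])
```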

The sound produced by the vocoder in response to those signals was analysed and cleaned up by neural networks, a type of AI that mimics the structure of brain cells.
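As a rough illustration of that clean-up stage, the fragment below defines a small feed-forward network in PyTorch that maps noisy acoustic frames to cleaner estimates. The layer sizes and the random tensors are invented for the example, not taken from the paper.

```python
import torch
from torch import nn

# A minimal denoiser: noisy acoustic frames in, cleaned-up frames out.
denoiser = nn.Sequential(
    nn.Linear(32, 256),   # 32 acoustic parameters per frame (illustrative)
    nn.ReLU(),
    nn.Linear(256, 32),
)

noisy = torch.randn(8, 32)          # a batch of noisy vocoder frames
clean_estimate = denoiser(noisy)    # the network's cleaned-up output

# Training would push the estimates towards the true clean frames.
target = torch.randn(8, 32)         # placeholder for real clean frames
loss = nn.functional.mse_loss(clean_estimate, target)
loss.backward()
```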

The end result was a robotic-sounding voice reciting a sequence of numbers.

To test the accuracy of the reconstruction, Prof Mesgarani and his team asked individuals to listen to the recordings and report what they heard.

He said: “We found people could understand and repeat the sounds about 75 per cent of the time, which is well above and beyond any previous attempts.”
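That 75 per cent figure is simply the fraction of digits listeners got right; the check can be reproduced in spirit with a few lines, using invented responses:

```python
# Digits actually played versus what a listener reported (invented data).
played   = [3, 7, 1, 9, 0, 4, 2, 8]
reported = [3, 7, 1, 5, 0, 4, 2, 8]

correct = sum(p == r for p, r in zip(played, reported))
accuracy = correct / len(played)
print(f"Listener accuracy: {accuracy:.0%}")   # 88% for this toy example
```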

The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts.

Prof Mesgarani said: “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”

The researchers plan to test more complicated words and sentences next, and want to run the same tests on brain signals recorded when a person speaks or imagines speaking.

By Mark Waghorn

SWNS

This content was supplied for The London Economic Newspaper by SWNS news agency.
