Blog: AI Produces Simulated Sentences From Brain Signals
Imagine being unable to speak and having to rely on machinery to transmit what you want to say to others. Stephen Hawking was one such person who relied on this type of system, and while he accomplished great things in his life, consider that such a system can only produce about 10 words per minute, compared with natural human speech at around 150.
Scientists have turned brain activity into simple words in the past, but as reported in Nature, University of California, San Francisco neurosurgeon Edward Chang has been experimenting with a technique that can produce entire sentences. The experimental setup recorded the speech of five people who had electrodes implanted on their brains as a treatment for epilepsy, and used a model of the human vocal tract to interpret the signals.
After training a deep learning algorithm with data from previous experiments, the program was able to translate the subjects' brain signals into estimated muscle movements, which were then turned into speech. On average, listeners could recognize 70% of the synthesized words, which sound like plausible human speech (if lo-fi and somewhat slurred) in an audio clip of the experiment. While there is still a lot of work to do, this could be a very good starting point for enhancing communication for those who currently have to rely on slow, movement-based communication systems.
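To make the two-stage idea concrete, here is a minimal sketch of a decoder with that shape: one stage maps neural features to estimated vocal-tract (articulator) movements, and a second stage maps those movements to acoustic features. Everything here is illustrative, not the actual method from the study: the feature sizes, the random stand-in weights, and the use of plain linear maps (the real system used recurrent neural networks) are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical two-stage decoder mirroring the pipeline described above:
# stage 1: neural activity -> estimated articulator (vocal tract) movements
# stage 2: articulator movements -> acoustic features for speech synthesis
# Linear maps with random weights stand in for the trained networks.

rng = np.random.default_rng(0)

N_NEURAL, N_ARTIC, N_ACOUSTIC = 64, 12, 32  # illustrative feature sizes

# Stand-in "trained" weights (random, purely for illustration).
W_neural_to_artic = rng.normal(size=(N_NEURAL, N_ARTIC))
W_artic_to_acoustic = rng.normal(size=(N_ARTIC, N_ACOUSTIC))

def decode(neural_frames: np.ndarray) -> np.ndarray:
    """Map a (time, N_NEURAL) array of neural features to acoustic features."""
    articulation = neural_frames @ W_neural_to_artic         # stage 1
    acoustics = np.tanh(articulation) @ W_artic_to_acoustic  # stage 2
    return acoustics

frames = rng.normal(size=(100, N_NEURAL))  # 100 time steps of recordings
print(decode(frames).shape)  # (100, 32): one acoustic frame per time step
```

The key design choice the sketch captures is the intermediate articulatory representation: rather than decoding sound directly from brain signals, the system first estimates how the vocal tract is moving, which is reportedly what made full-sentence synthesis tractable.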
Further commentary on the study by Chethan Pandarinath is available here.