Decoding Speech from Brain Wave Recordings

A paper published in Nature Machine Intelligence on 2023-10-05, “Decoding speech perception from non-invasive brain recordings” (full text at link), reports identifying, with up to 41% accuracy, the speech perceived by individuals entirely from magneto-encephalography signals picked up from outside the skull. Here is the abstract:

Decoding speech from brain activity is a long-awaited goal in both healthcare and neuroscience. Invasive devices have recently led to major milestones in this regard: deep-learning algorithms trained on intracranial recordings can now start to decode elementary linguistic features such as letters, words and audio-spectrograms. However, extending this approach to natural speech and non-invasive brain recordings remains a major challenge. Here we introduce a model trained with contrastive learning to decode self-supervised representations of perceived speech from the non-invasive recordings of a large cohort of healthy individuals. To evaluate this approach, we curate and integrate four public datasets, encompassing 175 volunteers recorded with magneto-encephalography or electro-encephalography while they listened to short stories and isolated sentences. The results show that our model can identify, from 3 seconds of magneto-encephalography signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities on average across participants, and with up to 80% in the best participants—a performance that allows the decoding of words and phrases absent from the training set. The comparison of our model with a variety of baselines highlights the importance of a contrastive objective, pretrained representations of speech and a common convolutional architecture simultaneously trained across multiple participants. Finally, the analysis of the decoder’s predictions suggests that they primarily depend on lexical and contextual semantic representations. Overall, this effective decoding of perceived speech from non-invasive recordings delineates a promising path to decode language from brain activity, without putting patients at risk of brain surgery.
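The core mechanism the abstract describes is a CLIP-like contrastive match: a brain-signal encoder and a (pretrained) speech encoder are trained so that the embedding of a 3-second MEG window lands close to the embedding of the speech segment the subject was hearing, and the decoder is then evaluated by retrieving the correct segment out of a pool of more than 1,000 candidates. Here is a minimal sketch of what that objective and the 1-of-N evaluation look like. This is not the authors’ released code; the function names, temperature value, and embedding shapes are illustrative assumptions.

```python
# Sketch of a contrastive (InfoNCE) objective aligning brain-signal
# embeddings with speech embeddings, plus the 1-of-N retrieval metric.
# All names and hyperparameters here are illustrative assumptions.

import torch
import torch.nn.functional as F

def contrastive_loss(brain_emb: torch.Tensor, speech_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over a batch: row i of brain_emb should match row i of
    speech_emb; every other row in the batch serves as a negative."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    logits = brain_emb @ speech_emb.T / temperature        # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)

@torch.no_grad()
def top1_retrieval_accuracy(brain_emb: torch.Tensor,
                            candidate_speech_emb: torch.Tensor,
                            true_idx: torch.Tensor) -> float:
    """Fraction of brain windows whose nearest speech candidate (by cosine
    similarity) is the true segment -- the '1 out of N' accuracy the
    abstract reports for N > 1,000 candidates."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    candidate_speech_emb = F.normalize(candidate_speech_emb, dim=-1)
    pred = (brain_emb @ candidate_speech_emb.T).argmax(dim=-1)
    return (pred == true_idx).float().mean().item()
```

With such an objective, the model never has to generate audio or text; it only has to rank candidate speech segments, which is what makes decoding words and phrases absent from the training set possible.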

The complete training set (175 volunteers and 160 hours of recordings) and the recognition code are posted on GitHub. The authors work at Meta AI (Facebook) in Paris.



I wonder if anyone has applied these techniques to a much simpler problem: improving/correcting the output from a throat (subvocal) microphone.
