Reading Music from Brainwaves of Listeners

The research paper is “Music can be reconstructed from human auditory cortex activity using nonlinear decoding models” (full text at link). Here is the abstract.

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior–posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications.
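The "stimulus reconstruction approach" the abstract mentions is, at its core, a regression from neural features to a representation of the sound (typically a spectrogram). Here is a minimal sketch of that idea using ridge regression on simulated data; the array sizes, the ridge penalty, and the use of a simple linear closed-form solution are all illustrative assumptions, not the authors' actual (nonlinear) pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: T time bins, E electrodes, F spectrogram frequency bands.
# (Shapes are arbitrary; real iEEG features would be, e.g., high-gamma power.)
T, E, F = 2000, 64, 32
true_W = rng.normal(size=(E, F))                 # hypothetical electrode-to-band weights
X = rng.normal(size=(T, E))                      # simulated neural features
Y = X @ true_W + 0.5 * rng.normal(size=(T, F))   # noisy "spectrogram" to reconstruct

# Ridge regression, closed form: W = (X'X + lam*I)^(-1) X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(E), X.T @ Y)

Y_hat = X @ W                                    # reconstructed spectrogram
# Decoding accuracy: correlation between actual and reconstructed, per band.
r = [np.corrcoef(Y[:, f], Y_hat[:, f])[0, 1] for f in range(F)]
print(f"mean reconstruction correlation: {np.mean(r):.3f}")
```

In a real pipeline the reconstructed spectrogram would then be inverted back to a waveform, which is where the eerie audio clip below comes from; nonlinear models (as in the paper) replace the linear map but keep the same features-to-spectrogram structure.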

This is a press release from the University of California, Berkeley, “Brain recordings capture musicality of speech — with help from Pink Floyd”.

This is the clip from “Another Brick in the Wall, Part 1” that was played for the subjects.

And here is the reconstruction from activity in the auditory cortex of the listener. Note that not only were the melody and rhythm decoded; you can almost hear the words (if you know what they are).


As this technology progresses, I will be interested to observe differences in brain processing and responses to different kinds of music: say, comparing the present Pink Floyd to a Bach fugue. However fanciful the impression, I strongly sense some mathematical structure in Bach's music. Does some reflection of the “mathematical universe” inhere in at least some music?


Another interesting question is whether music with a complex structure stimulates processing in the left hemisphere of the brain. The present research found right-hemisphere processing to be dominant, as the popular conception of the specialisation of the hemispheres would suggest. But one wonders whether music with a more mathematical construction, such as polyphony, would also activate the left hemisphere, which is often associated with “doing math” (although not necessarily the kind of creative thinking involved in discovering new ways to prove things).


Indeed. I find it quite curious that the researchers chose to use that particularly monotonous snippet of Pink Floyd to demonstrate their breakthrough. I would think that a Mozart symphony or piano concerto would be much more impressive. Perhaps the system that they developed can’t handle music of such complexity (yet).


That sounds reasonable. The sensors they used on the brain surface, although numerous compared to a conventional EEG, were still extremely sparse relative to the number of neurons whose signals they picked up, probably by a factor of tens of millions or more to one. This gave them crude spatial resolution, which compounded the difficulty of extracting a signal buried in noise. A more complex music clip would have further reduced the signal-to-noise ratio and made it harder to train their analysis network to reconstruct the sound.
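The effect of that sparse sampling can be illustrated with a toy simulation (not from the paper; neuron counts and noise levels are made up): each electrode effectively averages a huge population of neurons, and the smaller the fraction of neurons under the sensor that actually follow the stimulus, the weaker the correlation between what the electrode records and the stimulus itself.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
stimulus = rng.normal(size=T)     # hypothetical stimulus feature over time

def electrode_corr(n_neurons, n_tuned):
    """One electrode = average of n_neurons; only n_tuned track the stimulus."""
    tuned = stimulus + rng.normal(size=(n_tuned, T))        # signal + unit noise
    untuned = rng.normal(size=(n_neurons - n_tuned, T))     # background activity
    electrode = np.vstack([tuned, untuned]).mean(axis=0)
    return np.corrcoef(electrode, stimulus)[0, 1]

# Correlation with the stimulus drops as the tuned fraction shrinks.
n = 1000
rs = {frac: electrode_corr(n, int(n * frac)) for frac in (1.0, 0.1, 0.01)}
for frac, r in rs.items():
    print(f"tuned fraction {frac:>4}: r = {r:.2f}")
```

With only 1% of the population tuned, most of what the electrode picks up is background activity, which is roughly the situation a decoder faces before any averaging or model fitting.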


Thanks for the explanation! Still, it’s an impressive feat and provides a glimpse of what might be possible in the not-too-distant future. The benefits of such technology are obvious, but so are the detriments. I fear that the cons may ultimately outweigh the pros.