Google has teamed up with researchers in Japan on a new and ambitious musical project that applies the latest in Artificial Intelligence music technology to turn brain activity into music we can hear.
Over the past few months we have seen AI enter almost every conceivable field, making an unprecedented impact on our society at many different levels.
For this reason, we recently published an opinion piece comparing Robert Oppenheimer, father of the atomic bomb, with Sam Altman, CEO of OpenAI and, in a sense, progenitor of the Artificial Intelligence behind ChatGPT.
Both figures triggered changes that may seem modest and symbolic at first, but whose real impact on the history of humanity could be profound, permanently altering the social and cultural order.
A perfect example of this is the most recent project from Alphabet, the parent company of the world's most popular web browser, which seeks to convert brain waves into music, simply because we have reached a point where doing so is feasible.
MusicLM: Google’s Artificial Intelligence that can convert brain activity into melody
It turns out that Google has teamed up with a group of researchers in Japan who have found a way to produce music from human brain activity: the activity is captured through a series of functional magnetic resonance imaging (fMRI) scans and then reconstructed as audio with MusicLM, Google's new music-generation model.
The details of this process, which converts the workings of the human brain into harmonic tones, have been published in a research paper titled Brain2Music: Reconstructing Music from Human Brain Activity.
The document recounts how the project was assembled: 15-second clips were randomly selected from 540 pieces of music spanning ten different genres. Among the songs chosen for the experiment were melodies from Britney Spears, tracks from Led Zeppelin, and classics from the Beastie Boys.
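To picture what assembling a stimulus set like this involves, here is a small illustrative sketch of cutting one random 15-second clip from each piece; the file paths and helper are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: sample one random 15-second clip per music piece.
import glob
import random
import soundfile as sf

CLIP_SECONDS = 15

def random_clip(path: str):
    """Cut a random 15-second clip out of one music piece."""
    audio, sample_rate = sf.read(path)
    clip_len = CLIP_SECONDS * sample_rate
    start = random.randint(0, len(audio) - clip_len)
    return audio[start:start + clip_len], sample_rate

# Assumed layout: 540 pieces spanning ten genres under a local folder.
music_piece_paths = sorted(glob.glob("stimuli/**/*.wav", recursive=True))
clips = [random_clip(p) for p in music_piece_paths]
```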
Thereafter, five participants listened to the clips through MRI-compatible insert headphones while their brain activity was scanned.
The researchers then fed the fMRI data into MusicLM to "predict and reconstruct" the kind of music each subject had been exposed to.
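Conceptually, the idea is to learn a mapping from brain activity to the embedding MusicLM uses as a conditioning signal, then let the model synthesize audio from the predicted embedding. The sketch below assumes hypothetical helpers (`load_fmri_responses`, `load_music_embeddings`, `musiclm_generate`), since neither the model nor the data is publicly runnable; the ridge regression is one plausible choice of linear map, not necessarily the paper's exact setup.

```python
# Conceptual Brain2Music-style pipeline (helper names are hypothetical).
import numpy as np
from sklearn.linear_model import Ridge

# Step 1: pair each clip's fMRI response with the embedding MusicLM
# accepts as a conditioning signal.
X_train = load_fmri_responses("train")      # shape: (n_clips, n_voxels)
Y_train = load_music_embeddings("train")    # shape: (n_clips, emb_dim)

# Step 2: learn a linear map from brain activity to music embeddings.
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# Step 3: for a new scan, predict the embedding and let MusicLM
# synthesize audio conditioned on it.
X_test = load_fmri_responses("test")
predicted_embeddings = decoder.predict(X_test)
reconstructed_audio = [musiclm_generate(e) for e in predicted_embeddings]
```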
In the end, the generated music had similarities to the music the test subjects originally listened to “on a semantic level.”
The generated songs sound like different pieces, although they share similarities in tempo, time signature, and drum patterns.
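One way to quantify that "semantic level" resemblance is to embed both the original and the reconstructed clip in a shared music-embedding space and compare the vectors. This is a minimal sketch assuming a hypothetical `embed_audio` helper; it is not the paper's evaluation code.

```python
# Hypothetical sketch: semantic similarity via embedding comparison.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

original = embed_audio("original_clip.wav")        # shape: (emb_dim,)
reconstructed = embed_audio("reconstructed.wav")   # shape: (emb_dim,)

# Values near 1.0 mean the clips are semantically close (similar genre,
# mood, instrumentation), even if they are clearly different recordings.
print(f"semantic similarity: {cosine_similarity(original, reconstructed):.3f}")
```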
So we can verify it with our own ears, Google has launched a web page where we can listen to the music generated by the Artificial Intelligence.