The Possibility of New Music Generation via the Interconversion of Molecules and Music
3 main points
✔️ Proposed a method for interconverting molecules and music
✔️ Showed that similar molecules yield similar music
✔️ Demonstrated possibilities for new music generation
Molecular Sonification for Molecule to Music Information Transfer
written by Babak Mahjour, Jordan Bench, Rui Zhang, Jared Frazier, Timothy Cernak
(Submitted on 24 Mar 2022)
The images used in this article are from the paper, the introductory slides, or were created based on them.
Introduction
Although line drawings are the most established way to encode molecular structures, graphs, strings, one-hot encodings, and fingerprint-based representations are also important for computational studies of molecules.
In this paper, we show that music is a high-dimensional information storage medium that can be used to encode molecular structures. This allows for a molecular generation approach that utilizes artificial intelligence tactics for music generation.
If molecules can be encoded as music, sight can be substituted with sound; for example, this could provide a new way to present molecules to a blind chemist. With modern chemistry and drug discovery increasingly using artificial intelligence (AI), and with the explosion of AI methods in music research, new possibilities could be found by integrating chemical machine learning (ML) with music ML techniques. The impetus for this research was to explore ways of generating new molecules using music as a medium, but in the course of the research, we found that molecules can also provide ideas for generating new music.
In this research, we have developed a workflow for transferring molecules to music and vice versa. We call this SAMPLES (Sonic Architecture for Molecule Production and Live-Input Encoding Software).
Encoding
The key and the sequence of notes are derived from the molecule's physicochemical properties and its SELFIES string, respectively, so that the melody reflects the molecular structure. To determine the key, physicochemical properties such as logP, molecular weight, and the numbers of hydrogen-bond donors and acceptors are summed and projected onto an integer between 1 and 12, each corresponding to a specific key. The sequence of notes is then determined by a one-to-one mapping of the molecule's SELFIES tokens to multi-octave steps of the major scale.
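As a rough sketch of this key-hashing step (the paper does not spell out the exact property set, weighting, or projection here, so `props_to_key` is a hypothetical reconstruction using standard RDKit descriptors):

```python
# Minimal sketch: sum physicochemical descriptors, then reduce modulo 12 to
# pick a key. Illustrative only; SAMPLES' actual hash may differ.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

KEYS = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def props_to_key(smiles: str) -> str:
    mol = Chem.MolFromSmiles(smiles)
    total = (Descriptors.MolLogP(mol)
             + Descriptors.MolWt(mol)
             + Lipinski.NumHDonors(mol)
             + Lipinski.NumHAcceptors(mol))
    return KEYS[int(total) % 12]  # project the summed properties onto 12 keys

print(props_to_key("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> one of the 12 keys
```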
The final melody is generated by adding the MIDI value of the key's root note to the MIDI offsets corresponding to the major-scale steps. To enrich the texture of the music, every quarter note is converted to a major chord.
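The token-to-note mapping might look like the following sketch, assuming the selfies package for tokenization, a root note of MIDI 60, and a per-molecule token vocabulary (the real SAMPLES mapping is presumably fixed globally):

```python
# Sketch: map SELFIES tokens to major-scale MIDI notes, then add a major
# triad on each note. Octave handling and token ordering are illustrative.
import selfies as sf

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of the major scale

def selfies_to_midi(selfies_str: str, root: int = 60) -> list:
    tokens = list(sf.split_selfies(selfies_str))
    vocab = sorted(set(tokens))               # one-to-one token -> scale-step map
    chords = []
    for tok in tokens:
        i = vocab.index(tok)
        octave, step = divmod(i, len(MAJOR_STEPS))
        note = root + 12 * octave + MAJOR_STEPS[step]
        chords.append((note, note + 4, note + 7))   # major chord per note
    return chords

print(selfies_to_midi(sf.encoder("CC(=O)Oc1ccccc1C(=O)O")))  # aspirin as SELFIES
```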
Decoding
To decode, the MIDI is inverted and converted back into molecular structures on a key-by-key basis, so multiple candidate structures are generated from the same MIDI sequence (one per key).
Each candidate structure is then hashed to a key using the original encoding algorithm; if the hashed key matches the key assumed during inversion, the structure is accepted as the decoded molecule. For MIDI generated by SAMPLES, at least one key is guaranteed to match.
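The decode loop can be sketched as follows, where `midi_to_selfies` is a hypothetical inverse of the encoding and `props_to_key` is the hash from the encoding sketch above:

```python
# Sketch of the key-by-key decode: invert the MIDI under each of the 12 key
# assumptions, re-hash each candidate molecule, and keep the candidates whose
# hashed key matches the assumed one. Both callables are hypothetical stand-ins.
def decode_midi(midi_notes, midi_to_selfies, props_to_key, keys):
    matches = []
    for key in keys:
        candidate = midi_to_selfies(midi_notes, key)   # invert under this key
        if candidate is not None and props_to_key(candidate) == key:
            matches.append((key, candidate))           # hash check passed
    return matches
```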
Case Studies
Four case studies are presented to demonstrate the usefulness of SAMPLES.
For example, music generated from a molecule that conforms to Lipinski's rule of five can be distinguished by ear, based on its musical key, from music generated from a molecule that violates the rule, because the molecule's physicochemical properties are hashed into the key. As an illustration, physicochemical-property fingerprints of molecules from DrugBank were hashed to keys and compared with the keys of the most popular songs on the streaming service Spotify.
The concept of molecular similarity is very important for compound design, for example, when molecules with comparable functional properties are selected for drug discovery.
We examined whether SAMPLES generated from molecules with high Tanimoto coefficients sound similar, and found that similarity is difficult to define in both the molecular and the musical domain.
SAMPLES of codeine (10) and morphine (11) sound similar to each other. Similarly, the SAMPLES of sulfamethoxazole (12) and sulfadoxine (13) sound similar but different from the pair of 10 and 11.
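For reference, a Tanimoto coefficient on Morgan fingerprints can be computed with RDKit as below; aspirin and salicylic acid stand in for the codeine/morphine pair, since their SMILES are easy to verify:

```python
# Tanimoto similarity between Morgan (radius-2) bit fingerprints via RDKit.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in (smiles_a, smiles_b)]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

# Aspirin vs. salicylic acid: structurally close, so the coefficient is high.
print(tanimoto("CC(=O)Oc1ccccc1C(=O)O", "O=C(O)c1ccccc1O"))
```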
The second case study investigates generating molecules by editing in the music domain. This is possible in SAMPLES because SELFIES guarantees that edited strings still decode to valid molecular structures.
Starting from morphine (11), the score can be changed one note at a time to generate new chemical structures that retain a clear relationship to morphine while significantly altering the bonding and atoms.
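A one-token mutation of this kind can be sketched with the selfies package; `mutate_one_token` is an illustrative name, and the random token swap stands in for editing one note of the score:

```python
# Sketch: swap a single SELFIES token. Because every SELFIES string decodes
# to a valid molecule, editing one "note" still yields a valid structure.
import random
import selfies as sf

def mutate_one_token(smiles: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    tokens = list(sf.split_selfies(sf.encoder(smiles)))
    alphabet = sorted(sf.get_semantic_robust_alphabet())
    i = rng.randrange(len(tokens))
    tokens[i] = rng.choice(alphabet)       # edit one "note" of the score
    return sf.decoder("".join(tokens))     # SELFIES guarantees validity

print(mutate_one_token("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, one token changed
```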
Note that SAMPLES may generate undefined chiral centers.
In the third case study, the melody-mixing function of MusicVAE was applied to MIDI melodies generated by SAMPLES.
The new melody can then be converted back into a molecular structure with SAMPLES, creating a new molecule that is a "blend" of the two inputs. This function is called CROSSFADE. Algorithms for creating new molecules by blending molecular structures or properties already exist, but the interactivity provided by CROSSFADE is distinctive.
As an example, a CROSSFADE of glutamate (17) and acetylcholine (18) yields (19); compounds 20-28 provide further examples.
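Stripped of the MusicVAE specifics, the underlying idea is interpolation in a learned latent space; in this sketch, `encode` and `decode` stand in for a trained model's encoder and decoder and are not a real MusicVAE API:

```python
# Generic latent-space "crossfade": encode two items, interpolate between
# their latent vectors, and decode each intermediate point.
import numpy as np

def crossfade(item_a, item_b, encode, decode, steps: int = 5):
    za, zb = encode(item_a), encode(item_b)
    return [decode((1.0 - t) * za + t * zb)        # linear blend in latent space
            for t in np.linspace(0.0, 1.0, steps)]
```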
As a final experiment, we surveyed 32 students.
A SAMPLES song for a single study molecule was played, and participants were asked to listen to the SAMPLES songs of four test molecules and select the most similar one. (One of the four test molecules had a higher Tanimoto coefficient with the study molecule than the other three.) For three of the four pairs tested, students selected the molecule with the highest Tanimoto coefficient based on the similarity between the SAMPLES songs.
However, the codeine (10) and morphine (11) songs were not judged the most musically similar, despite the obvious similarity between the molecules; further research is needed to understand why. Even so, the results highlight the possibility of interpreting molecules through sensory substitution, from the visual to the auditory, when visual information is unavailable.
Summary
We have developed a new means of encoding organic molecules as music. Molecular structures can be manipulated by editing, inserting, and deleting parts of the melody, and new molecular structures can even be generated this way.
One transfer-learning application of this research is music generation. The motivation for using machine learning for content generation is its generality: such a model needs no explicit grammar or rules in order to generate content.
Converting molecules to music can provide a collection of musical data for training music generation models such as MusicVAE. In particular, seq2seq models such as RNNs can bridge domains with variable-length signals, including text, music, and structure-based machine-readable molecular representations.
In the molecular context, a variational autoencoder learns the distribution of molecular features, such as SELFIES tokens, and provides a continuous embedding of molecular space; SAMPLES offers a means of connecting molecules directly to content-generating machine learning models in the music domain.
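As a minimal sketch of that idea (a toy GRU-based sequence VAE in PyTorch; the sizes and architecture are illustrative, not the paper's):

```python
# Toy sequence VAE: an encoder maps a token-index sequence to (mu, logvar),
# the reparameterization trick samples a continuous embedding z, and a
# decoder reconstructs the sequence from z.
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    def __init__(self, vocab_size=64, emb=32, hidden=64, zdim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.enc = nn.GRU(emb, hidden, batch_first=True)
        self.to_mu, self.to_logvar = nn.Linear(hidden, zdim), nn.Linear(hidden, zdim)
        self.from_z = nn.Linear(zdim, hidden)
        self.dec = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens)
        _, h = self.enc(x)                          # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        h0 = self.from_z(z).unsqueeze(0)            # z seeds the decoder state
        y, _ = self.dec(x, h0)                      # teacher-forced decode
        return self.out(y), mu, logvar

logits, mu, logvar = SeqVAE()(torch.randint(0, 64, (2, 10)))
```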