Eventide has released a trial version of Mood, a new plug-in that characterizes the emotional content of music and translates it into MIDI signals.
Mood analyzes a song's key, spectral content, tempo, dynamics, and other characteristics to create a set of ‘descriptors’, which are then compared against a database. The database has been populated by people listening to and rating pop songs.
Mood displays, in real time, the relative intensity of four emotions – angry, calm, happy and sad. The idea is that the intensity of these emotions can be translated to MIDI and OSC values which could be used, for example, to control the brightness and color of lights on stage or in a dance club.
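To make the MIDI side concrete, here is a minimal sketch of how emotion intensities might be packed into standard MIDI Control Change messages for a lighting rig. The four emotion names come from the article; the controller numbers (20–23), the 0.0–1.0 intensity scale, and the function itself are illustrative assumptions, not part of Mood's documented output.

```python
# Hypothetical mapping of Mood-style emotion intensities to MIDI CC messages.
# Controller numbers 20-23 are arbitrary choices for this sketch.
EMOTION_CCS = {"angry": 20, "calm": 21, "happy": 22, "sad": 23}

def emotion_to_cc_bytes(emotion: str, intensity: float, channel: int = 0) -> bytes:
    """Build a raw 3-byte MIDI Control Change message.

    A CC message is: status byte (0xB0 | channel), controller number, value.
    An intensity in [0.0, 1.0] is scaled to the 7-bit MIDI range 0-127.
    """
    value = max(0, min(127, round(intensity * 127)))
    return bytes([0xB0 | (channel & 0x0F), EMOTION_CCS[emotion], value])

# e.g. "happy" at full intensity on channel 0:
msg = emotion_to_cc_bytes("happy", 1.0)  # -> b'\xb0\x16\x7f'
```

A lighting controller listening for those controller numbers could then map each CC value to, say, brightness or hue.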
How Mood Works
While a computer algorithm can analyze audio, it cannot, on its own, map the results of that analysis to how the audio will make someone feel. The computer must be trained by humans. Training is done by asking people to listen to example songs and judge the degree to which each song evokes each emotion. The algorithm then analyzes these rated songs to determine which characteristics are involved in eliciting specific emotions. This process creates the ‘descriptors’ that can then be used to analyze a newly submitted song.
“Mood is a bit whimsical and no doubt some will question why we bothered to create the plug-in. The fact is that audio analysis is at the heart of what we do and we were curious to explore the possibility of using signal analysis to map musical content to emotion,” said Eventide’s Tony Agnello. “We were also inspired by a well-known producer who, upon learning of the idea, said we were ‘nuts’. Fair enough.”
To date, Mood has only been trained on pop songs. Solo voice, solo instruments, jazz, and classical music will not yield meaningful results. Training is ongoing, however, and Eventide is hoping that people will download the plug-in and help improve it.
Mood is available for download at no cost for a 90-day trial period. If you try out Mood, let us know what you think of it!