"Machine sound": synthesizers based on neural networks

Developers from Google's Magenta research project have released an open-source synthesizer called NSynth Super. It is based on a machine-learning system that blends several pre-loaded samples (for example, the sound of a guitar and a piano) into a new sound with unique characteristics.
More details about the NSynth Super system and other composing algorithms are discussed below.


More about NSynth Super

The NSynth Super synthesizer has a touchscreen that displays a square "work surface." The musician selects several instruments whose sounds will be used to create a new sound and assigns them to the corners of this square.
During a performance, the player shapes the output by moving a pointer within the working field. The resulting sample combines the source sounds in proportions that depend on how close the cursor is to each corner.
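The corner-mixing idea can be illustrated with bilinear interpolation: a cursor position inside the unit square yields four weights that sum to one, one per corner instrument. This is a toy sketch of the interaction concept only (in the real device the blending happens on learned sound representations, not raw waveforms), and all function names here are hypothetical.

```python
import numpy as np

def corner_weights(x, y):
    """Bilinear weights for a cursor at (x, y) in the unit square.

    Corner order: bottom-left, bottom-right, top-left, top-right.
    Weights always sum to 1; the closer the cursor is to a corner,
    the more that corner's instrument contributes to the mix.
    """
    return np.array([
        (1 - x) * (1 - y),  # bottom-left
        x * (1 - y),        # bottom-right
        (1 - x) * y,        # top-left
        x * y,              # top-right
    ])

def mix(corner_sounds, x, y):
    """Blend four equally long audio buffers according to cursor position."""
    w = corner_weights(x, y)
    return sum(wi * s for wi, s in zip(w, corner_sounds))
```

With the cursor exactly on a corner, only that corner's sound is heard; at the center, all four contribute equally.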
New samples are synthesized by the machine-learning algorithm NSynth, which was trained on 30,000 instrument sounds using the open-source libraries TensorFlow and openFrameworks. The model also makes use of WaveNet.
To generate new samples, NSynth analyzes 16 characteristics of the incoming sounds. These are linearly interpolated to create mathematical representations of each audio signal, which are then decoded back into sounds that combine the acoustic qualities of the algorithm's inputs.
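The interpolation step described above can be sketched as follows. This is a minimal illustration, assuming each sound has already been encoded into a feature vector (the encoder and decoder, which in NSynth are WaveNet-style networks, are out of scope here); the function name is hypothetical.

```python
import numpy as np

def interpolate_latents(z_a, z_b, alpha):
    """Linearly interpolate between two encoded sound representations.

    z_a, z_b : feature vectors produced by an encoder (e.g. 16 values).
    alpha    : blend factor, 0.0 -> pure z_a, 1.0 -> pure z_b.

    The blended vector would then be passed to a decoder to produce
    audio that shares qualities of both source sounds.
    """
    return (1 - alpha) * np.asarray(z_a) + alpha * np.asarray(z_b)
```

Because the blending happens on these representations rather than on raw audio, the decoded result is a genuinely new timbre rather than two sounds simply layered on top of each other.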
You can use NSynth Super with any MIDI source: for example, a DAW, a synthesizer, or a sequencer. You can see how NSynth Super works in this video, in which a performer mixes the sounds of a sitar, an electric piano, and other instruments:

NSynth Super is an experimental tool, so it will not be sold as a commercial product. However, its source code and assembly instructions are published on GitHub.

Who else uses machine learning to create music

The Magenta project is also working on other machine-learning technologies. One of them is MusicVAE, a model that can "blend" melodies. Several web applications have already been built on top of it: Melody Mixer, Beat Blender, and Latent Loops. MusicVAE (and other Magenta models) are collected in the open-source library Magenta.js.
Other companies are also working on music-generation algorithms. For example, Sony Computer Science Laboratories is running the Flow Machines project. Its AI system can analyze different musical styles and use that knowledge to create new compositions. One example of its output is the music for the song "Daddy's Car," written in the style of The Beatles.

Several applications have been created within the Flow Machines project: for example, FlowComposer, which helps musicians write music in a given style, and Reflexive Looper, which fills in missing instrumental parts on its own. Flow Machines tools were even used to record and release a music album, Hello World.
Another example is the startup Jukedeck, which is developing a tool for creating compositions with a given mood and tempo. The company continues to develop the project and invites developers and musicians to collaborate. Here is an example of a composition created by Jukedeck's machine-learning algorithms:

A similar tool is being built by Amper. The user chooses the mood, style, tempo, and duration of a composition, as well as the instruments it will be "played" on, and the application synthesizes music to match those requirements.
Popgun is also working on AI systems for writing music; it is developing algorithms capable of writing original pop songs. The streaming giant Spotify conducts research in this area as well: last year the company opened a laboratory in Paris that will develop tools based on AI systems.

Will AI replace composers?

Although a number of companies are developing algorithms for creating music, their representatives emphasize that these tools are meant not to replace musicians and composers but, on the contrary, to give them new opportunities.
American singer Taryn Southern released an album recorded using artificial-intelligence systems. Southern used tools from Amper, IBM, Magenta, and AIVA. In her words, the experience was like working with a person who helps you create music.
Moreover, machine-learning algorithms can be used not only by composers but also by other specialists in the music industry. Neural networks handle classification tasks better than people do, and music streaming services can use this to determine the genres of songs.
In addition, machine-learning algorithms can separate vocals from the accompaniment, create musical transcriptions, or mix tracks.
By the way, if you like reading about sound in a micro format, check out our Telegram channel:
Amazing sounds of nature
How to hear the color
Water Songs
And longer stories in our blog on Yandex.Zen:
4 famous people who were fond of music
11 interesting facts from the history of the brand Marshall
