Abstract
We extend our evaluation of generative models of music transcriptions that were first presented in Sturm, Santos, Ben-Tal, and Korshunova (2016). We evaluate the models in five different ways: 1) at the population level, comparing statistics of 30,000 generated transcriptions with those of over 23,000 training transcriptions; 2) at the practice level, examining the ways in which specific generated transcriptions are successful as music compositions; 3) as a "nefarious tester", seeking the
music knowledge limits of the models; 4) in the context of assisted music composition, using the models to create music within the conventions of the training data; and finally, 5) taking the models to real-world music practitioners. Our work demonstrates new approaches to evaluating the application of machine learning methods to modelling and
making music, and the importance of taking the results back to the realm of music practice to judge their usefulness.
Our datasets and software are open and available at https://github.com/IraKorshunova/folk-rnn.
| Original language | English |
|---|---|
| Journal | Journal of Creative Music Systems |
| Volume | 2 |
| Issue number | 1 |
| Publication status | Published - 30 Sept 2017 |
Keywords
- Computer science and informatics
- Deep learning
- algorithmic composition
- evaluation
- music modelling
- recurrent neural network (RNN)