Olympia was built to explore the creative potential of machines in music composition. In its current iteration, Olympia generates multi-part songs using an ensemble of neural networks trained on MIDI clips mined from free online music repositories (midiworld.com, freemidi.com). Beyond a machine learning exercise, Olympia is intended to be a collaborative platform for music composition between humans and machines. The modeling engine uses the music21 project (https://web.mit.edu/music21) extensively for processing and compiling songs. Output from the Olympia engine has been crafted into tracks, which may be found at the SoundCloud link below.
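
As a rough illustration of the kind of preprocessing music21 enables, the sketch below parses a MIDI clip into (pitch, duration) events. The file path, function name, and exact event representation are assumptions for illustration, not Olympia's actual pipeline:

```python
from music21 import converter, note, chord

def extract_events(midi_path):
    """Parse a MIDI clip and return a list of (pitch, duration) tuples."""
    score = converter.parse(midi_path)
    events = []
    for element in score.flat.notes:
        if isinstance(element, note.Note):
            pitch = str(element.pitch)
        elif isinstance(element, chord.Chord):
            # Represent a chord by the normal order of its pitch classes.
            pitch = '.'.join(str(p) for p in element.normalOrder)
        else:
            continue
        events.append((pitch, float(element.duration.quarterLength)))
    return events

# Example usage (path is hypothetical):
# events = extract_events('clips/example.mid')
```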

For the creative neural network workflow, the model ensemble contains models governing duration and note progression, plus a sequence model to capture structural repetition (ABA, ABB, etc.). LSTM models were attractive for this application because of the differential memory structures in the LSTM cell. The sigmoid layer of the forget gate lets the model discard past information, weighting the relevance of relationships between steps in the time series. This suits a music application because a resolution tone three time steps back may be more significant than a consonant tone directly preceding the current note. Applying these concepts in generative music may account for the differential impact of recency on composition. Finally, models were optimized not only for convergence to the training time series but also against a set of music theory rules. A sketch of one such sequence model follows.
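
To make the ensemble idea concrete, here is a minimal sketch of how one of the per-part models (e.g. the note-progression model) might be structured as a stacked LSTM in Keras. The layer widths, sequence length, and vocabulary size are illustrative assumptions and are not taken from the Olympia code:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

SEQUENCE_LENGTH = 32   # number of prior notes the model sees (assumed)
VOCAB_SIZE = 128       # distinct pitch/chord tokens in the corpus (assumed)

model = Sequential([
    # Each LSTM cell's gates decide which past events to keep or discard,
    # so an important resolution several steps back can outweigh the
    # note immediately preceding the current step.
    LSTM(256, input_shape=(SEQUENCE_LENGTH, 1), return_sequences=True),
    Dropout(0.3),
    LSTM(256),
    Dense(256, activation='relu'),
    Dropout(0.3),
    Dense(VOCAB_SIZE, activation='softmax'),  # distribution over the next note
])
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Training would map sliding windows of encoded notes to the next note:
# X shape (n_samples, SEQUENCE_LENGTH, 1), y shape (n_samples, VOCAB_SIZE).
```

Generated sequences could then be filtered or re-scored against music theory rules before compilation into a song, in the spirit of the rule-based optimization described above.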

Read more: source code