Published on Fri Feb 09 2018

Neural Dynamic Programming for Musical Self Similarity

Christian J. Walder, Dongwoo Kim

Abstract

We present a neural sequence model designed specifically for symbolic music. The model is based on a learned edit distance mechanism which generalises a classic recursion from computer science, leading to a neural dynamic program. Repeated motifs are detected by learning the transformations between them. We represent the arising computational dependencies using a novel data structure, the edit tree; this perspective suggests natural approximations which afford the scaling up of our otherwise cubic time algorithm. We demonstrate our model on real and synthetic data; in all cases it outperforms a strong stacked long short-term memory benchmark.
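The "classic recursion" generalised here is presumably the Levenshtein edit-distance dynamic program, which fills an (m+1)x(n+1) table of minimal edit costs; a minimal sketch of that baseline recursion follows (the paper's neural variant learns the transformation costs rather than fixing them at one, so this is only the starting point, not the proposed model):

```python
def edit_distance(a, b):
    """Classic Levenshtein dynamic program, O(len(a) * len(b)) time."""
    m, n = len(a), len(b)
    # dp[i][j] = minimal number of edits turning a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i symbols of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all j symbols of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # delete a[i-1]
                dp[i][j - 1] + 1,        # insert b[j-1]
                dp[i - 1][j - 1] + sub,  # match or substitute
            )
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (two substitutions and one insertion). In the paper's setting, comparing every pair of prefixes of a single sequence against itself is what yields the cubic overall cost that the edit-tree approximations address.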

Thu May 21 2020
Artificial Intelligence
An approach to Beethoven's 10th Symphony
Ludwig van Beethoven composed his symphonies between 1799 and 1825. A neural network model has been built based on Long Short-Term Memory (LSTM) neural networks. The generated music has been analysed by comparing the input data with the results.
Thu Dec 01 2016
Artificial Intelligence
Computer Assisted Composition with Recurrent Neural Networks
Sequence modeling with neural networks has led to powerful models of symbolic music data. We address the problem of exploiting these models to reach creative musical goals. Our algorithms are capable of convincingly re-harmonising famous musical works.
Wed Jul 19 2017
Machine Learning
From Bach to the Beatles: The simulation of human tonal expectation using ecologically-trained predictive models
Tonal structure is in part conveyed by statistical regularities between musical events. Research has shown that computational models reflect tonal structure in music by capturing these regularities in schematic constructs like pitch histograms. Our experiments indicate that various types of recurrent neural networks produce musical expectations.
Wed Nov 16 2016
Artificial Intelligence
Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding
Artificial intelligence has not yet been able to generate natural-sounding music conforming to music theory. A new method combines an LSTM with grammars motivated by music theory to generate music that inherits the naturalness of the human-composed pieces in the original dataset.
Sun Dec 09 2012
Machine Learning
High-dimensional sequence transduction
We investigate the problem of transforming an input sequence into a high-dimensional output sequence. We introduce a probabilistic model based on a recurrent neural network. The resulting method produces musically plausible transcriptions even under high levels of noise.
Mon Jun 03 2013
Neural Networks
Riemannian metrics for neural networks II: recurrent networks and learning symbolic data sequences
Recurrent neural networks are powerful models for sequential data, yet they are notoriously hard to train. Here we introduce a training procedure using gradient ascent in a Riemannian metric. This produces an algorithm independent of design choices such as the encoding of parameters.