Published on Tue Apr 27 2021

Generating Lead Sheets with Affect: A Novel Conditional seq2seq Framework

Dimos Makris, Kat R. Agres, Dorien Herremans
Abstract

The field of automatic music composition has seen great progress in recent years, much of which can be attributed to advances in deep neural networks. Numerous studies present strategies for generating sheet music from scratch. Including high-level musical characteristics (e.g., perceived emotional qualities) as conditions for controlling the generated output, however, remains a challenge. In this paper, we present a novel approach for calculating the valence (the positivity or negativity of the perceived emotion) of a chord progression within a lead sheet, using pre-defined mood tags proposed by music experts. Based on this approach, we propose a novel strategy for conditional lead sheet generation that allows us to steer the generation in terms of valence, phrasing, and time signature. We treat the task as a Neural Machine Translation (NMT) problem, including the high-level conditions in the encoder part of the sequence-to-sequence architectures we use (a long short-term memory network and a Transformer network). Experiments thoroughly analyzing these two architectures show that the proposed strategy generates lead sheets in a controllable manner, with distributions of musical attributes similar to those of the training dataset. A subjective listening test further verified that our approach is effective in controlling the valence of a generated chord progression.
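One common way to feed high-level conditions (such as valence or time signature) into the encoder of a seq2seq model is to embed each condition and prepend the resulting vectors to the token-embedding sequence before encoding. The sketch below illustrates that idea in miniature; the embedding tables, dimensions, and condition names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): prepend condition
# embeddings to the encoder input of a seq2seq model, NMT-style.
rng = np.random.default_rng(0)
EMB_DIM = 8  # assumed toy embedding size

# Toy embedding tables for the conditions and for note/chord tokens.
valence_emb = {"low": rng.normal(size=EMB_DIM), "high": rng.normal(size=EMB_DIM)}
timesig_emb = {"4/4": rng.normal(size=EMB_DIM), "3/4": rng.normal(size=EMB_DIM)}
token_emb = rng.normal(size=(100, EMB_DIM))  # vocabulary of 100 tokens

def build_encoder_input(tokens, valence, timesig):
    """Prepend condition embeddings to the token embeddings.

    The encoder (LSTM or Transformer) then sees the conditions as the
    first positions of the sequence and can attend to them throughout.
    """
    conds = np.stack([valence_emb[valence], timesig_emb[timesig]])
    seq = token_emb[tokens]
    return np.concatenate([conds, seq], axis=0)

enc_in = build_encoder_input([3, 17, 42], valence="high", timesig="4/4")
print(enc_in.shape)  # (5, 8): 2 condition vectors + 3 token embeddings
```

Because the conditions occupy fixed positions at the front of the sequence, changing the valence tag changes the context every later token is encoded against, which is what makes the generation steerable.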

Wed Feb 05 2020
Artificial Intelligence
Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions
Automatic music generation is an interdisciplinary research topic that combines computational creativity and semantic analysis of music. An important property of such a system is allowing the user to specify conditions and desired properties of the generated music. In this paper, we design a model for composing melodies given a user-specified symbolic scenario combined with a previous music context.
Wed Sep 12 2018
Machine Learning
Music Transformer
Self-reference occurs on multiple timescales, from motifs to phrases to reusing entire sections of music. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance. This is impractical for long sequences such as musical compositions, since their memory complexity is quadratic in the sequence length.
Fri Feb 19 2021
Machine Learning
Hierarchical Recurrent Neural Networks for Conditional Melody Generation with Long-term Structure
Wed Nov 16 2016
Artificial Intelligence
Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding
Artificial intelligence has not yet been able to generate natural-sounding music conforming to music theory. A new method combines an LSTM with grammar rules motivated by music theory to generate music that inherits the naturalness of the human-composed pieces in the original dataset.
Thu Jun 25 2020
Machine Learning
Modeling Baroque Two-Part Counterpoint with Neural Machine Translation
We propose a system for contrapuntal music generation based on a Neural Machine Translation (NMT) paradigm. We collate and edit a bespoke dataset of Baroque pieces to train an attention-based neural network model.
Thu Sep 02 2021
Artificial Intelligence
Controllable deep melody generation via hierarchical music structure representation
This paper introduces MusicFrameworks, a hierarchical music structure representation and a multi-step generative process. We first organize the full melody with section and phrase-level structure. To generate melody in each phrase, we generate rhythm and basic melody using two separate networks.