
Citation Details

Ping-Keng Jao, Li Su, Yi-Hsuan Yang, and Brendt Wohlberg, "Monaural Music Source Separation using Convolutional Sparse Coding", IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 11, pp. 2158-2170, Nov. 2016, doi:10.1109/TASLP.2016.2598323.


We present a comprehensive performance study of a new time-domain approach for estimating the components of an observed monaural audio mixture. Unlike existing time-frequency approaches, which approximate the spectrogram of the mixture by the product of a set of spectral templates and their corresponding activation patterns, the proposed approach approximates the audio mixture directly in the time domain by a sum of convolutions of estimated activations with prelearned dictionary filters. The approximation problem can be solved by an efficient convolutional sparse coding algorithm. The effectiveness of this approach for source separation of musical audio was demonstrated in our prior work, but under rather restricted and controlled conditions: the musical score of the mixture was assumed to be known a priori, and there was little mismatch between the dictionary filters and the source signals. In this paper, we report an evaluation that considers wider, and more practical, experimental settings. This includes the use of an audio-based multi-pitch estimation algorithm in place of the musical score, and an external dataset of single audio notes to construct the dictionary filters. Our results show that the proposed approach remains effective with a larger dictionary and compares favorably with the state-of-the-art non-negative matrix factorization approach. However, in the absence of the score and in the case of a small dictionary, our approach may not be better.
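The synthesis model described above, a mixture approximated directly in the time domain as a sum of convolutions of sparse activation maps with prelearned dictionary filters, can be sketched as follows. This is a minimal illustration of the model only, not the paper's separation algorithm; the filter count, lengths, and random data are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prelearned dictionary: M short time-domain filters d_m.
M, filter_len, signal_len = 4, 32, 1024
filters = rng.standard_normal((M, filter_len))

# Sparse activation maps z_m: mostly zero, a few nonzero onsets per filter.
activations = np.zeros((M, signal_len))
for m in range(M):
    onsets = rng.choice(signal_len - filter_len, size=3, replace=False)
    activations[m, onsets] = rng.standard_normal(3)

# Convolutional synthesis: x ~ sum_m (d_m * z_m), directly in the time
# domain -- no spectrogram is involved, unlike NMF-style approaches.
approx = sum(
    np.convolve(activations[m], filters[m], mode="full")[:signal_len]
    for m in range(M)
)

print(approx.shape)  # one time-domain signal of length signal_len
```

In convolutional sparse coding, the activations (and, at training time, the filters) are estimated by minimizing the reconstruction error of this sum subject to a sparsity penalty on the activations.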

BibTeX Entry

@article{jao-2016-monaural,
  author  = {Ping-Keng Jao and Li Su and Yi-Hsuan Yang and Brendt Wohlberg},
  title   = {Monaural Music Source Separation using Convolutional Sparse Coding},
  year    = {2016},
  month   = nov,
  urlpdf  = {},
  journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  volume  = {24},
  number  = {11},
  doi     = {10.1109/TASLP.2016.2598323},
  pages   = {2158--2170}
}