
Cite Details

Saiprasad Ravishankar and Brendt Wohlberg, "Learning Multi-Layer Transform Models", in Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, (Monticello, IL), doi:10.1109/ALLERTON.2018.8635913, pp. 160--165, Oct 2018

Abstract

Learned data models based on sparsity are widely used in signal processing and imaging applications. A variety of methods for learning synthesis dictionaries, sparsifying transforms, etc., have been proposed in recent years, often imposing useful structures or properties on the models. In this work, we focus on sparsifying transform learning, which enjoys a number of advantages. We consider multi-layer or nested extensions of the transform model, and propose efficient learning algorithms. Numerical experiments with image data illustrate the behavior of the multi-layer transform learning algorithm and its usefulness for image denoising. Multi-layer models provide better denoising quality than single-layer schemes.
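
To make the nested structure concrete, the following is a minimal toy sketch in Python/NumPy, not the learning algorithm from the paper: two fixed random orthonormal matrices stand in for learned sparsifying transforms, the first layer sparsifies vectorized image patches by hard thresholding, and the second layer sparsifies the layer-1 residual. All names, thresholds, and the placeholder patch data are illustrative assumptions.

# Toy sketch of a two-layer (nested) sparsifying transform model.
# W1, W2 are stand-ins for learned transforms; the paper learns them from data.
import numpy as np

rng = np.random.default_rng(0)

def random_orthonormal(n):
    # Stand-in for a learned square transform (orthonormal, so inversion is a transpose).
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def hard_threshold(z, tau):
    # Keep coefficients with magnitude above tau; zero the rest.
    return np.where(np.abs(z) > tau, z, 0.0)

def two_layer_denoise(patches, W1, W2, tau1, tau2):
    """Denoise vectorized patches (columns): layer 1 sparsifies the patches,
    layer 2 sparsifies the layer-1 transform-domain residual."""
    z1 = hard_threshold(W1 @ patches, tau1)   # layer-1 sparse codes
    r1 = W1 @ patches - z1                    # layer-1 residual
    z2 = hard_threshold(W2 @ r1, tau2)        # layer-2 sparse codes
    r1_hat = W2.T @ z2                        # invert layer 2
    return W1.T @ (z1 + r1_hat)               # invert layer 1

n = 64                                        # vectorized 8x8 patches
W1, W2 = random_orthonormal(n), random_orthonormal(n)
noisy = rng.standard_normal((n, 500))         # placeholder noisy patch matrix
denoised = two_layer_denoise(noisy, W1, W2, tau1=1.0, tau2=1.0)
print(denoised.shape)

In the paper the transforms are learned jointly with the sparse codes and the thresholds play the role of sparsity parameters; this sketch only illustrates how a second layer can act on what the first layer leaves unmodeled.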

BibTeX Entry

@inproceedings{ravishankar-2018-learning,
  author    = {Saiprasad Ravishankar and Brendt Wohlberg},
  title     = {Learning Multi-Layer Transform Models},
  year      = {2018},
  month     = oct,
  urlpdf    = {http://proceedings.allerton.csl.illinois.edu/media/files/0284.pdf},
  urlhtml   = {http://arxiv.org/abs/1810.08323},
  booktitle = {Proceedings of the Annual Allerton Conference on Communication, Control, and Computing},
  address   = {Monticello, IL},
  doi       = {10.1109/ALLERTON.2018.8635913},
  pages     = {160--165}
}