
Sparse Auto-encoder with Smoothed $$l_1$$ Regularization





Publisher
Springer Journals
Copyright
Copyright © 2017 by Springer Science+Business Media, LLC
Subject
Computer Science; Artificial Intelligence (incl. Robotics); Complex Systems; Computational Intelligence
ISSN
1370-4621
eISSN
1573-773X
DOI
10.1007/s11063-017-9668-5

Abstract

Improving an auto-encoder's performance at data representation can help to obtain a satisfactory deep network. One strategy for enhancing this performance is to incorporate sparsity into the auto-encoder. Traditionally, sparsity has been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, the $$l_1$$ regularization is a well-known technique for inducing sparsity. This paper therefore introduces a smoothed $$l_1$$ regularization, in place of the commonly used KL divergence, to enforce sparsity in auto-encoders. Experimental results show that the smoothed $$l_1$$ regularization works better than the KL divergence.
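The abstract contrasts two sparsity penalties on hidden activations but does not specify the paper's exact smoothing function. A minimal NumPy sketch, assuming the common smoothing $\sqrt{h^2 + \varepsilon}$ (differentiable at zero, approximating $|h|$) and the standard KL-divergence penalty used in sparse auto-encoders, might look like:

```python
import numpy as np

def smoothed_l1(h, eps=1e-3):
    """Smoothed l1 penalty: sum of sqrt(h^2 + eps), an illustrative
    differentiable surrogate for sum |h| (the paper's exact smoothing
    may differ)."""
    return np.sum(np.sqrt(h ** 2 + eps))

def kl_sparsity(h, rho=0.05):
    """Classic KL-divergence sparsity penalty.

    rho is the target mean activation; rho_hat is the observed mean
    activation of each hidden unit over the batch (activations are
    assumed to lie in (0, 1), e.g. sigmoid outputs)."""
    rho_hat = np.clip(np.mean(h, axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Hypothetical batch of sigmoid hidden activations (4 samples, 3 units):
# unit 1 is active on every sample, units 0 and 2 are nearly silent.
h = np.array([[0.01, 0.90, 0.05],
              [0.02, 0.85, 0.04],
              [0.01, 0.95, 0.06],
              [0.03, 0.80, 0.05]])
l1_pen = smoothed_l1(h)   # grows with the magnitude of every activation
kl_pen = kl_sparsity(h)   # penalizes units whose mean activation strays from rho
```

Either penalty would be added, with a weight, to the auto-encoder's reconstruction loss; the paper's claim is that the smoothed $$l_1$$ variant yields better representations than the KL term.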

Journal

Neural Processing Letters (Springer Journals)

Published: Jul 7, 2017
