Improving the data-representation performance of an auto-encoder helps in building a better deep network. One strategy for enhancing this performance is to incorporate sparsity into the auto-encoder, which has commonly been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, however, the $$l_1$$ regularization is a widely used technique known to induce sparsity. This paper therefore introduces a smoothed $$l_1$$ regularization, in place of the commonly used KL divergence, to enforce sparsity in auto-encoders. Experimental results show that the smoothed $$l_1$$ regularization works better than the KL divergence.
Neural Processing Letters – Springer Journals
Published: Jul 7, 2017
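As a concrete illustration of the two sparsity penalties compared in the abstract, the NumPy sketch below computes the standard KL-divergence term used in sparse auto-encoders and one common smoothed $$l_1$$ surrogate, $$\sqrt{a^2+\varepsilon}$$. The exact smoothing adopted in the paper is not reproduced here, so the choice of surrogate, the function names, and the default parameters are illustrative assumptions.

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations, rho=0.05):
    """Standard KL-divergence sparsity term for sparse auto-encoders:
    sum over hidden units j of KL(rho || rho_hat_j), where rho_hat_j is
    the mean activation of unit j over the batch (activations in (0, 1))."""
    rho_hat = np.clip(hidden_activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def smoothed_l1_penalty(hidden_activations, eps=1e-3):
    """One common smoothed l1 term: sum_j sqrt(a_j^2 + eps), a
    differentiable surrogate for |a_j|; the particular smoothing used
    in the paper may differ (assumption)."""
    return np.sum(np.sqrt(hidden_activations ** 2 + eps))

# Example: penalties on a batch of hidden activations (4 samples, 3 units)
h = np.array([[0.02, 0.90, 0.10],
              [0.05, 0.80, 0.00],
              [0.01, 0.70, 0.20],
              [0.03, 0.95, 0.05]])
print(kl_sparsity_penalty(h))   # grows as mean activations drift from rho
print(smoothed_l1_penalty(h))   # grows as activations move away from zero
```

In training, either term would be added to the reconstruction loss with a sparsity weight; the smoothed surrogate stays differentiable at zero, which is what makes it compatible with gradient-based optimization of the auto-encoder.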