An autoencoder can learn the structure of data adaptively and represent it efficiently. These properties make autoencoders well suited not only to data of large volume and variety but also to avoiding the cost of hand-designed features and the poor generalization that comes with them. Moreover, using autoencoders for feature extraction in deep learning can yield better classification accuracy. However, autoencoders suffer from poor robustness and overfitting. To extract useful features while improving robustness and overcoming overfitting, we study the denoising sparse autoencoder, which adds a corrupting operation and a sparsity constraint to the traditional autoencoder. The results suggest that the different autoencoders discussed in this paper are closely related and that the studied model extracts interesting features that reconstruct the original data well. In addition, all results point to the proposed autoencoder as a promising building block for deep models.
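The two additions described above, a corrupting operation and a sparsity constraint, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the class name, hyperparameter values, the choice of masking noise for corruption, and the use of a KL-divergence sparsity penalty (in the style of sparse autoencoders with tied weights) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingSparseAutoencoder:
    """Single hidden layer, tied weights, masking noise + KL sparsity penalty."""

    def __init__(self, n_visible, n_hidden, corruption=0.3,
                 sparsity_target=0.05, sparsity_weight=0.1):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b = np.zeros(n_hidden)      # hidden bias
        self.c = np.zeros(n_visible)     # visible bias
        self.corruption = corruption     # fraction of inputs zeroed out
        self.rho = sparsity_target       # desired mean hidden activation
        self.beta = sparsity_weight      # weight of the sparsity penalty

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def step(self, x, lr=0.1):
        """One gradient step; returns mean squared reconstruction error."""
        n = x.shape[0]
        # corrupting operation: masking noise zeroes a random subset of inputs
        x_tilde = x * (rng.random(x.shape) > self.corruption)
        h = self.encode(x_tilde)
        y = self.decode(h)
        # reconstruction error is measured against the *clean* input,
        # so the model must denoise, not merely copy
        delta_y = (y - x) * y * (1.0 - y)
        # sparsity constraint: gradient of KL(rho || rho_hat) per hidden unit
        rho_hat = h.mean(axis=0)
        kl_grad = self.beta * (-self.rho / rho_hat
                               + (1.0 - self.rho) / (1.0 - rho_hat))
        delta_h = (delta_y @ self.W + kl_grad) * h * (1.0 - h)
        # tied weights: W accumulates gradients from encoder and decoder paths
        self.W -= lr * (x_tilde.T @ delta_h + delta_y.T @ h) / n
        self.b -= lr * delta_h.mean(axis=0)
        self.c -= lr * delta_y.mean(axis=0)
        return 0.5 * np.mean(np.sum((y - x) ** 2, axis=1))

# tiny demo on random binary data
X = (rng.random((64, 16)) > 0.5).astype(float)
ae = DenoisingSparseAutoencoder(n_visible=16, n_hidden=8)
first_loss = ae.step(X)
for _ in range(500):
    last_loss = ae.step(X)
```

Scoring reconstruction against the clean input while feeding the corrupted one is what makes the features robust; the KL penalty keeps the average activation of each hidden unit near the sparsity target so capacity is focused on a few informative features at a time.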
International Journal of Machine Learning and Cybernetics (Springer)
Published: May 31, 2016