This paper develops a semi-supervised learning algorithm called convolutional deep networks (CDN) to address the image classification problem with deep learning. First, we construct the first several hidden layers using convolutional restricted Boltzmann machines, which reduce the dimension of the images and abstract their information effectively. Second, we construct the subsequent hidden layers using restricted Boltzmann machines, which abstract the information of the images quickly. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. CDN thus reduces the dimension and abstracts the information of the images efficiently at the same time. More importantly, the abstraction and classification procedures of CDN share the same deep architecture, continuously optimizing the same parameters across the different steps, which improves the learning ability effectively. We conducted several experiments on two standard image datasets and show that CDN is competitive with both representative semi-supervised classifiers and existing deep learning techniques.
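The abstract's first two steps rest on training restricted Boltzmann machines layer by layer. The paper itself gives no code, so the following is only a minimal sketch of the standard contrastive-divergence (CD-1) update for a binary RBM, the building block both the convolutional and fully-connected layers rely on; all variable names, sizes, and the learning rate are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM (illustrative sketch).

    v0 : (batch, n_visible) binary data
    W  : (n_visible, n_hidden) weight matrix
    b  : (n_visible,) visible biases
    c  : (n_hidden,) hidden biases
    """
    # Positive phase: hidden-unit probabilities given the data.
    h0 = sigmoid(v0 @ W + c)
    # Sample hidden states, then reconstruct the visible layer once.
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h0_sample @ W.T + b)
    h1 = sigmoid(v1 @ W + c)
    # Gradient estimate: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 4 hidden units, a random binary batch.
W = 0.01 * rng.standard_normal((6, 4))
b = np.zeros(6)
c = np.zeros(4)
v = (rng.random((8, 6)) < 0.5).astype(float)
for _ in range(100):
    W, b, c = cd1_step(v, W, b, c)
```

A convolutional RBM replaces the dense products `v0 @ W` with convolutions so that weights are shared across image locations, which is what lets the first layers of CDN reduce dimension cheaply; the fine-tuning step then treats the stacked weights as an ordinary feed-forward network and backpropagates the exponential loss through them.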
Neural Processing Letters – Springer Journals
Published: Nov 20, 2012