
Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for k-Means Clustering




Publisher
Springer Journals
Copyright
Copyright © 2018 by Springer Science+Business Media, LLC, part of Springer Nature
Subject
Mathematics; Algorithms; Computational Mathematics and Numerical Analysis; Mathematical and Computational Engineering; Theoretical, Mathematical and Computational Physics
ISSN
0885-7474
eISSN
1573-7691
DOI
10.1007/s10915-018-0744-4

Abstract

In this paper, we propose an implicit gradient descent algorithm for the classic k-means problem. The implicit gradient step, or backward Euler step, is solved via stochastic fixed-point iteration, in which we randomly sample a mini-batch gradient in every iteration. It is the average of the fixed-point trajectory that is carried over to the next gradient step. We draw connections between the proposed stochastic backward Euler and the recent entropy stochastic gradient descent for improving the training of deep neural networks. Numerical experiments on various synthetic and real datasets show that the proposed algorithm provides better clustering results compared to standard k-means algorithms, in the sense that it decreases the objective function (the cluster energy) and is much more robust to initialization.
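The abstract outlines the core loop: an implicit (backward Euler) update for the cluster centers, approximated by an inner stochastic fixed-point iteration over mini-batch gradients, whose trajectory average becomes the next iterate. The sketch below is one plausible reading of that description, not the paper's reference implementation; the step size, inner iteration count, batch size, and the exact gradient normalization are assumptions chosen for illustration.

```python
import numpy as np

def kmeans_grad(points, centers):
    """Mini-batch gradient of the k-means energy w.r.t. the centers.

    Each sampled point pulls its nearest center toward itself;
    the gradient is normalized by the mini-batch size (an assumed
    convention, not taken from the paper).
    """
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    grad = np.zeros_like(centers)
    for j in range(len(centers)):
        mask = assign == j
        if mask.any():
            grad[j] = (centers[j] - points[mask]).sum(axis=0) / len(points)
    return grad

def stochastic_backward_euler(data, centers, step=1.0, outer=40,
                              inner=10, batch=50, rng=None):
    """Sketch of a stochastic backward Euler step for k-means.

    The implicit update c <- c - step * grad(c_new) is approximated by
    the fixed-point map y <- c - step * grad(y) on mini-batch gradients,
    and the average of the fixed-point trajectory is carried over to the
    next gradient step, as the abstract describes.
    """
    rng = np.random.default_rng(rng)
    c = centers.copy()
    for _ in range(outer):
        y = c.copy()
        avg = np.zeros_like(c)
        for _ in range(inner):
            idx = rng.choice(len(data), size=min(batch, len(data)),
                             replace=False)
            y = c - step * kmeans_grad(data[idx], y)
            avg += y
        c = avg / inner  # trajectory average becomes the next iterate
    return c
```

On well-separated synthetic blobs this sketch drives each center toward its cluster mean: the inner fixed-point map is a contraction toward a point between the current center and the (mini-batch) cluster mean, so the outer averaged iterates converge geometrically.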

Journal

Journal of Scientific Computing (Springer Journals)

Published: May 31, 2018
