Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for k-Means Clustering

Publisher: Springer Journals
Copyright: © 2018 Springer Science+Business Media, LLC, part of Springer Nature
Subjects: Mathematics; Algorithms; Computational Mathematics and Numerical Analysis; Mathematical and Computational Engineering; Theoretical, Mathematical and Computational Physics
ISSN: 0885-7474
eISSN: 1573-7691
DOI: 10.1007/s10915-018-0744-4

Abstract

In this paper, we propose an implicit gradient descent algorithm for the classic k-means problem. The implicit gradient step, or backward Euler, is solved via stochastic fixed-point iteration, in which we randomly sample a mini-batch gradient in every iteration. It is the average of the fixed-point trajectory that is carried over to the next gradient step. We draw connections between the proposed stochastic backward Euler and the recent entropy stochastic gradient descent for improving the training of deep neural networks. Numerical experiments on various synthetic and real datasets show that the proposed algorithm provides better clustering results compared to k-means algorithms, in the sense that it decreases the objective function (the cluster energy) and is much more robust to initialization.
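To make the scheme described above concrete, the following is a minimal sketch, not the authors' reference implementation: the backward Euler step x_{k+1} = x_k - gamma * grad f(x_{k+1}) for the k-means energy is approximated by a stochastic fixed-point iteration y <- x_k - gamma * grad f(y) with mini-batch gradients, and the average of the fixed-point trajectory is carried over as the next iterate. The function names (sbe_kmeans, minibatch_grad) and all hyperparameters (gamma, outer_iters, inner_iters, batch_size) are illustrative assumptions, not values from the paper.

```python
# Sketch of stochastic backward Euler (SBE) for the k-means energy
# f(X) = (1/N) * sum_i min_j ||p_i - x_j||^2, per the abstract's description.
import numpy as np

def minibatch_grad(centroids, batch):
    """Mini-batch gradient of the k-means energy w.r.t. the centroids."""
    # Squared distances between each batch point and each centroid.
    d2 = ((batch[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)  # nearest-centroid assignment
    grad = np.zeros_like(centroids)
    for j in range(centroids.shape[0]):
        pts = batch[labels == j]
        if len(pts) > 0:
            # d/dx_j of (1/|B|) * sum_{i in cluster j} ||p_i - x_j||^2
            grad[j] = 2.0 * (len(pts) * centroids[j] - pts.sum(axis=0)) / len(batch)
    return grad

def sbe_kmeans(data, k, outer_iters=200, inner_iters=5,
               gamma=1.0, batch_size=64, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids with k distinct random data points.
    x = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(outer_iters):
        y, traj = x.copy(), np.zeros_like(x)
        for _ in range(inner_iters):
            batch = data[rng.choice(len(data), size=batch_size, replace=False)]
            # One stochastic fixed-point step for the implicit equation
            # x_{k+1} = x_k - gamma * grad f(x_{k+1}).
            y = x - gamma * minibatch_grad(y, batch)
            traj += y
        x = traj / inner_iters  # trajectory average becomes the next iterate
    return x

if __name__ == "__main__":
    # Tiny synthetic test: three Gaussian blobs in the plane.
    rng = np.random.default_rng(1)
    blobs = np.concatenate([rng.normal(c, 0.3, size=(200, 2))
                            for c in [(0, 0), (3, 0), (0, 3)]])
    print(sbe_kmeans(blobs, k=3))
```

Note the key structural difference from plain stochastic gradient descent: the inner fixed-point loop always anchors at the current outer iterate x, and it is the averaged trajectory, rather than the last inner iterate, that is passed to the next outer step.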

Journal: Journal of Scientific Computing, Springer Journals

Published: May 31, 2018
