Neural Process Lett (2018) 47:949–973
Learning Algorithms for Quaternion-Valued Neural Networks
Published online: 25 September 2017
© Springer Science+Business Media, LLC 2017
Abstract This paper presents the derivation of the enhanced gradient descent, conjugate gradient, scaled conjugate gradient, quasi-Newton, and Levenberg–Marquardt methods for training quaternion-valued feedforward neural networks, using the framework of the HR calculus. The strong performance of these algorithms in the real- and complex-valued cases motivated their extension to the quaternion domain as well. Experiments using the proposed training methods on time series prediction applications showed a significant performance improvement over the quaternion gradient descent algorithm.
Keywords Quaternion-valued neural networks · Quickprop · Resilient backpropagation ·
Delta-bar-delta · SuperSAB · Conjugate gradient algorithms · Scaled conjugate gradient
algorithm · Quasi-Newton algorithms · Levenberg–Marquardt algorithm · Time series
The domain of quaternion-valued neural networks has received increasing interest over the last few years. Popular applications of these networks include chaotic time series prediction, color image compression, color night vision, polarized signal classification, and 3D wind forecasting [25,52,54].
Some signals in the 3D and 4D domains can be expressed more naturally in quaternion-valued form. These networks therefore appear as a natural choice for solving problems such as time series prediction. Several methods have been proposed to increase the efficiency of learning in quaternion-valued neural networks, including different network architectures and different learning algorithms, some of which are specially designed for this type of network, while others are extended from the real-valued case.
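To make the quaternion representation concrete, the following sketch (not taken from the paper; the function name and the sample values are illustrative assumptions) shows how a 3D sample can be embedded as a pure quaternion and multiplied by a quaternion weight via the Hamilton product, which is the basic operation inside a quaternion-valued neuron:

```python
def hamilton_product(p, q):
    """Hamilton product of quaternions p = a + bi + cj + dk and
    q = e + fi + gj + hk, each given as a 4-tuple (real, i, j, k)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,   # real part
            a*f + b*e + c*h - d*g,   # i part
            a*g - b*h + c*e + d*f,   # j part
            a*h + b*g - c*f + d*e)   # k part

# A hypothetical 3D sample (e.g. a wind-velocity reading) embedded as a
# pure quaternion 0 + xi + yj + zk, so all three channels are carried
# through a single quaternion-valued weight:
sample = (0.0, 1.2, -0.5, 3.1)
weight = (0.5, 0.1, -0.2, 0.3)

# Note the product is non-commutative: weight * sample != sample * weight.
print(hamilton_product(weight, sample))
print(hamilton_product(sample, weight))
```

The non-commutativity shown here is precisely why quaternion gradients need a dedicated framework such as the HR calculus rather than a direct reuse of real-valued derivative rules.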
Department of Computer and Software Engineering, Polytechnic University Timișoara, Blvd. V. Pârvan, No. 2, 300223 Timișoara, Romania