References

P. Negi, D. Labate (2012). 3-D Discrete Shearlet Transform and Video Processing. IEEE Transactions on Image Processing, 21.
Philippos Mordohai, G. Medioni (2006). Tensor Voting: A Perceptual Organization Approach to Computer Vision and Machine Learning.
Xiaolan Liu, Tengjiao Guo, Lifang He, Xiaowei Yang (2015). A Low-Rank Approximation-Based Transductive Support Tensor Machine for Semisupervised Classification. IEEE Transactions on Image Processing, 24.
P. Gastaldo, L. Pinna, L. Seminara, M. Valle, R. Zunino (2014). A Tensor-Based Pattern-Recognition Framework for the Interpretation of Touch Modality in Artificial Skin Systems. IEEE Sensors Journal, 14.
Hadis Takallou, S. Kasaei (2014). Head Pose Estimation and Face Recognition Using a Non-linear Tensor-Based Model. IET Computer Vision, 8.
R. Chellappa, A. Roy-Chowdhury, S. Zhou (2005). Recognition of Humans and Their Activities Using Video.
N. Sidiropoulos, L. Lathauwer, Xiao Fu, Kejun Huang, E. Papalexakis, C. Faloutsos (2016). Tensor Decomposition for Signal Processing and Machine Learning. IEEE Transactions on Signal Processing, 65.
Fuyi Xu, Xinguang Zhang, Yonghong Wu, Lishan Liu (2017). Global Existence and the Optimal Decay Rates for the Three Dimensional Compressible Nematic Liquid Crystal Flow. Acta Applicandae Mathematicae, 150.
N. Renard, S. Bourennane (2009). Dimensionality Reduction Based on Tensor Modeling for Classification Methods. IEEE Transactions on Geoscience and Remote Sensing, 47.
Zhuang Wang, K. Crammer, S. Vucetic (2012). Breaking the Curse of Kernelization: Budgeted Stochastic Gradient Descent for Large-Scale SVM Training. Journal of Machine Learning Research, 13.
He Guo, Pan Xinglong, Zhang Chaojie, M. Tingfeng, Qin Jiufeng (2011). Multi-sensor Information Fusion Method and Its Applications on Fault Detection of Diesel Engine. Proceedings of the 2011 International Conference on Computer Science and Network Technology, 4.
M. Borghi, E. Mattarelli, Jarin Muscoloni, C. Rinaldini, T. Savioli, B. Zardin (2017). Design and Experimental Development of a Compact and Efficient Range Extender Engine. Applied Energy, 202.
Francesco Orabona, Claudio Castellini, B. Caputo, Jie Luo, G. Sandini (2010). On-line Independent Support Vector Machines. Pattern Recognition, 43.
Qibin Zhao, Guoxu Zhou, T. Adalı, Liqing Zhang, A. Cichocki (2013). Kernelization of Tensor-Based Models for Multiway Data Analysis: Processing of Multidimensional Structured Data. IEEE Signal Processing Magazine, 30.
Fei Xia, Hao Zhang, Kaikiang Zhang, D. Peng (2010). Research of Condenser Fault Diagnosis Method Based on Neural Network and Information Fusion. 2010 2nd International Conference on Computer and Automation Engineering (ICCAE), 5.
Xiaowei Xu, Hongxia Wang, N. Zhang, Zhenxing Liu, Xiaoqing Wang (2017). Review of the Fault Mechanism and Diagnostic Techniques for the Range Extender Hybrid Electric Vehicle. IEEE Access, 5.
Shuicheng Yan, Dong Xu, Qiang Yang, Lei Zhang, Xiaoou Tang, HongJiang Zhang (2007). Multilinear Discriminant Analysis for Face Recognition. IEEE Transactions on Image Processing, 16.
Z. Hao, Lifang He, Bingqian Chen, Xiaowei Yang (2013). A Linear Support Higher-Order Tensor Machine for Classification. IEEE Transactions on Image Processing, 22.
Yu-long Zhan, Wei Wei, Chong-fu Huo, Zhiyuan Yang (2009). Research on Delamination Fault Diagnosis of Marine Diesel Engine Based on Support Vector Machine. 2009 IEEE International Symposium on Industrial Electronics.
K. Plataniotis, A. Venetsanopoulos (2000). Color Image Processing and Applications.
Berend Weel, M. d'Angelo, E. Haasdijk, A. Eiben (2017). Online Gait Learning for Modular Robots with Arbitrary Shapes and Sizes. Artificial Life, 23.
Yanyan Chen, Kuaini Wang, P. Zhong (2016). One-Class Support Tensor Machine. Knowledge-Based Systems, 96.
P. Zhao, S. Hoi, Rong Jin (2011). Double Updating Online Learning. Journal of Machine Learning Research, 12.
Johann Bengua, H. Phien, H. Tuan, M. Do (2016). Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train. IEEE Transactions on Image Processing, 26.
Mengdi Wang, Ethan Fang, Han Liu (2014). Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions. Mathematical Programming, 161.
Hongcheng Wang, N. Ahuja (2004). Compact Representation of Multidimensional Data Using Tensor Rank-One Decomposition. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 1.
Jyrki Kivinen, Alex Smola, R. Williamson (2001). Online Learning with Kernels. IEEE Transactions on Signal Processing, 52.
Heungjae Lee, Bok-Shin Ahn, Young-Moon Park (2000). A Fault Diagnosis Expert System for Distribution Substations. IEEE Transactions on Power Delivery, 15.
Danushka Bollegala (2014). Dynamic Feature Scaling for Online Learning of Binary Classifiers. arXiv, abs/1407.7584.
Haiping Lu, K. Plataniotis, A. Venetsanopoulos (2008). MPCA: Multilinear Principal Component Analysis of Tensor Objects. IEEE Transactions on Neural Networks, 19.
B. Wahono, W. Santoso, Arifin Nur, Amin (2015). Analysis of Range Extender Electric Vehicle Performance Using Vehicle Simulator. Energy Procedia, 68.
K. Lau, Qinghua Wu (2003). Online Training of Support Vector Classifier. Pattern Recognition, 36.
Heikki Rasilo, O. Räsänen (2017). An Online Model for Vowel Imitation Learning. Speech Communication, 86.
M. Ratcliff, A. Dane, Aaron Williams, J. Ireland, J. Luecke, Robert McCormick, K. Voorhees (2010). Diesel Particle Filter and Fuel Effects on Heavy-Duty Diesel Engine Emissions. Environmental Science & Technology, 44(21).
D. Tao, Xuelong Li, Xindong Wu, Weiming Hu, S. Maybank (2006). Supervised Tensor Learning. Knowledge and Information Systems.
P. Auer, N. Cesa-Bianchi, C. Gentile (2000). Adaptive and Self-Confident On-Line Learning Algorithms.
Zhou Ron (2013). Online Support Tensor Machine. Journal of Frontiers of Computer Science and Technology.
Giovanni Cavallanti, N. Cesa-Bianchi, C. Gentile (2006). Tracking the Best Hyperplane with a Simple Budget Perceptron. Machine Learning, 69.
V. Chandrasekaran, P. Shah (2017). Relative Entropy Optimization and Its Applications. Mathematical Programming, 161.
It is difficult to develop accurate mathematical models of range extender electric vehicles because of the nonlinear and complex coupling of the monitored signal sources, which results from the many moving parts and complex architecture of the range extender, and because of the limited storage space of the diagnostic device. In this study, we propose the smooth iterative online support tensor machine (SIOSTM) algorithm, which combines the support higher-order tensor machine with the online stochastic gradient descent method, and apply it to fault diagnosis of the range extender. Four algorithms, the support vector machine (SVM), the smooth iterative online support vector machine (SIOSVM), the linear support higher-order tensor machine (SHTM), and the proposed SIOSTM, were adopted to diagnose and classify the fault samples of the range extender, and their diagnostic accuracy and model learning time were compared. The fault diagnosis method based on the smooth iterative online support tensor machine showed higher accuracy, shorter learning time, and smaller storage requirements. Based on the experimental results, it is feasible to apply the SIOSTM model to the fault diagnosis of electric vehicle range extenders.

Keywords: electric vehicle range extender, fault diagnosis, smooth iterative online support tensor machine, smooth iterative online support vector machine

Date received: 5 April 2018; accepted: 1 November 2018. Handling Editor: Zhixiong Li.

School of Automobile and Traffic Engineering, Wuhan University of Science and Technology, Wuhan, China. Corresponding author: Xiaowei Xu, School of Automobile and Traffic Engineering, Wuhan University of Science and Technology, No. 947, Heping Street, Qingshan District, Wuhan 430081, Hubei, China. Email: [email protected]. Published in Advances in Mechanical Engineering under the Creative Commons Attribution 4.0 License.

Introduction

Due to the shortage of energy and environmental pollution, the development of traditional electric vehicles is restricted, while extended-range electric vehicles have been developing fast. However, a fault in the range extender directly affects the reliability and safety of the electric vehicle. Therefore, to improve the safety, reliability, and economy of extended-range electric vehicles, it is of great significance to diagnose the working condition of the range extender accurately. For extender fault diagnosis, the traditional simple diagnosis method was applied first, and then conventional diagnostic technology based on sensors, dynamic testing, and signal processing was developed. With the improvement of computational technology, intelligent diagnosis methods using artificial intelligence have appeared, including the expert system fault diagnosis method, the particle filter diagnosis method, the multi-source information fusion diagnosis method, the neural network diagnosis method, and the support vector machine (SVM) diagnosis method.

Traditional intelligent diagnosis techniques use vectors as the input samples for classification learning, which easily results in loss of information and destroys the correlation structure of the data. To avoid these problems, Tao et al. proposed support tensor learning (STL), which extends the vector classification model to tensor patterns by making full use of the structural information of the data and its correlations. In 2013, Hao et al. proposed the support higher-order tensor machine (SHTM) model, which uses CP decomposition to greatly reduce computation and storage space. In 2015, Liu et al. designed the concave-convex procedure-based transductive support tensor machine (CCCP-TSTM), which can solve nonconvex sample problems and shorten the iteration time. As the expansion and complement of the vector model, the tensor model brings a certain improvement in fault diagnosis accuracy and sample learning time. In recent years, many researchers have put forward various learning algorithms based on the tensor pattern, which have attracted wide attention and application in areas such as face recognition, data mining, machine learning, computer vision, and three-dimensional liquid crystal flows.

SIOSTM

In pattern recognition, machine learning, computer vision, image processing, and other research areas, objects such as face images are represented by second-order tensors; color images, gray video sequences, silhouette sequences, and multispectral images are usually expressed as third-order tensors; and color video sequences can be seen as fourth-order tensors. The fault diagnosis samples in this article are represented as third-order tensors.

In this article, we propose the SIOSTM and take fault samples in third-order tensor form as the classification objects. In 2013, R. Zhou et al. proposed the online support tensor machine (OSTM), but when that algorithm is applied to fault diagnosis it is difficult to obtain the optimal hyperplane if the number of samples is small. In addition, the Lagrange multipliers oscillate during online iterative updating, so the algorithm converges with difficulty. To solve these problems, we propose the SIOSTM: the Lagrange multipliers are already close to the optimal hyperplane at the initial iteration, and during online learning they approach the optimal values in a smooth iterative way.
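The tensor representation discussed above can be made concrete with a small numerical sketch. The sizes are shrunk, synthetic stand-ins for the paper's 4 x 501 x 31 samples, and the names `sample`, `flat`, and `fibre` are illustrative:

```python
import numpy as np

# A third-order state sample: signal type x crank angle x rotation speed,
# here with 4 signal types, 9 crank-angle points, and 3 speeds (synthetic data).
rng = np.random.default_rng(42)
sample = rng.standard_normal((4, 9, 3))

# Vector-mode learning flattens the sample, discarding which axis is which:
flat = sample.reshape(-1)   # 108 features; the mode structure is lost

# Tensor-mode learning keeps the modes addressable, e.g. the full crank-angle
# profile of signal type 0 at speed index 2:
fibre = sample[0, :, 2]

assert flat.shape == (108,)
assert fibre.shape == (9,)
```

The flattened vector still contains the same numbers, but a vector classifier can no longer exploit which entries belong to the same signal, crank angle, or speed, which is the information loss the tensor models aim to avoid.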
The above models often use batch or offline learning methods. However, in the engineering application of extender failure prediction, the signal data for machine learning are collected one by one in a sequence and sent to the learning system in an endless stream. It is unrealistic to carry out machine learning with good performance using one or several stationary classifiers on such a long and changing data sequence; training on large-scale samples in this way not only costs a long time but also achieves relatively low precision. Therefore, an online learning algorithm is introduced into the prediction model. Common online learning algorithms include the online support vector classifier (OSVC), the online independent support vector machine (OISVM), double updating online learning (DUOL), and adaptive and self-confident online learning. Because of the low cost, low time consumption, and high precision of online learning for large-scale samples, it has been applied in multiple fields, such as dynamic two-category classification, vowel imitation learning, and modular robots.

In this article, the smooth iterative online support tensor machine (SIOSTM) algorithm is developed by combining two algorithms, the linear SHTM and online stochastic gradient descent (OSGD). Using tensor-type data as input, the characteristic parameters of the fault samples of the electric vehicle range extender are extracted and the faults are diagnosed.

Smooth iterative online support vector machine

Using the basic formulas of tensors and the definitions of the n-mode product and norm, the alternating projection algorithm proposed by Q. Zhao et al. is simplified into an SVM model with tensors as input samples, also known as the generalized support tensor model:

  min_{W,b,ξ}  (1/2)‖W‖² + C Σ_{i=1}^{l} ξ_i                                  (1)
  s.t.  y_i(⟨W, X_i⟩ + b) ≥ 1 − ξ_i                                            (2)
        ξ_i ≥ 0,  i = 1, 2, …, l                                               (3)

where the weight parameter W and the input samples X_i are tensor-type data of the same size.

Based on model (1)–(3), we propose online learning. Because online learning can deal with only one sample at a time, a sample {X_t, y_t} is selected arbitrarily, and with ξ_t = max(0, 1 − y_t(⟨W, X_t⟩ + b)) the following cost function is considered:

  Q(W, b) = (1/2)‖W‖² + C max(0, 1 − y_t(⟨W, X_t⟩ + b))                        (4)

Then

  dQ(W, b)/dW = W,               if y_t(⟨W, X_t⟩ + b) ≥ 1
              = W − C y_t X_t,   if y_t(⟨W, X_t⟩ + b) < 1                      (5)

  dQ(W, b)/db = 0,               if y_t(⟨W, X_t⟩ + b) ≥ 1
              = −C y_t,          if y_t(⟨W, X_t⟩ + b) < 1                      (6)

To make the cost function reach its minimum, we use the stochastic gradient descent method to obtain the update formulas for W and b at step k:

  W_k = W_{k−1} − η dQ(W, b)/dW                                                (7)
  b_k = b_{k−1} − η dQ(W, b)/db                                                (8)

where η is the learning efficiency factor. Based on the above update formulas and the OSGD method, the weight parameters are updated as follows. When y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) ≥ 1:

  W_k = (1 − η) W_{k−1}                                                        (9)
  b_k = b_{k−1}                                                                (10)

When y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) < 1:

  W_k = (1 − η) W_{k−1} + C η y_t X_t                                          (11)
  b_k = b_{k−1} + C η y_t                                                      (12)

When OSGD is used to solve the classification problem, W and b may oscillate over the iterations; that is, the W and b values at step k − 1 can differ considerably from those at step k, and may even change sign. When the number of samples is small, W and b may never reach their optimal values, so the online learning effect is not ideal. To solve this problem, we first train the training samples with the SVM algorithm before using the online support vector machine model, obtain the initial values W_0 and b_0, and then carry out online learning.

After the training samples have been trained by the SVM algorithm, the initial values W_0 and b_0 are close to the optimal W and b. We therefore want the values W_{k−1} and b_{k−1} at step k − 1 to stay close to W_k and b_k at step k; that is, the change in W and b per step should be small, so that they approach the optimal values in a gentle and stable process. To achieve this, we impose the conditions

  ‖W_k − W_{k−1}‖ ≤ 0.01 ‖W_{k−1}‖                                             (13)
  ‖b_k − b_{k−1}‖ ≤ 0.01 ‖b_{k−1}‖                                             (14)

that is,

  ‖(1 − η) W_{k−1} + C η y_t X_t − W_{k−1}‖ ≤ 0.01 ‖W_{k−1}‖                   (15)
  ‖C η y_t‖ ≤ 0.01 ‖b_{k−1}‖                                                   (16)

from which

  η = min( 0.01 ‖W_{k−1}‖ / ‖C y_t X_t − W_{k−1}‖,  0.01 ‖b_{k−1}‖ / ‖C y_t‖ ) (17)

where W_{k−1} and b_{k−1} are taken as the initial values W_0 and b_0 obtained after training the SVM model, with samples {X_i, y_i}, i ∈ I. Then η is

  η = min( 0.01 ‖W_0‖ / ‖C y_i X_i − W_0‖,  0.01 ‖b_0‖ / ‖C y_i‖ )             (18)

In this article, the above algorithm is called the smooth iterative online support vector machine (SIOSVM).

SIOSTM

To obtain the SHTM formulation, we first optimize equations (1)–(3) and obtain the following Lagrange function:

  L(W, b, α, β, ξ) = (1/2)‖W‖²_F + C Σ_{i=1}^{l} ξ_i
                     − Σ_{i=1}^{l} α_i [y_i(⟨W, X_i⟩ + b) + ξ_i − 1]
                     − Σ_{i=1}^{l} β_i ξ_i                                     (19)

where α_i ≥ 0, β_i ≥ 0, i = 1, 2, …, l. Taking the derivatives of L(W, b, α, β, ξ) with respect to the variables W, b, and ξ and setting the results equal to zero gives

  W = Σ_{i=1}^{l} α_i y_i X_i,   Σ_{i=1}^{l} α_i y_i = 0,
  0 ≤ α_i ≤ C,  i = 1, 2, …, l                                                 (20)

The weight parameter W is related only to the input samples X_i with α_i ≠ 0. Therefore, when α_i ≠ 0 the input sample X_i is a support tensor, and the index set of the support tensors is defined as I = {i | α_i ≠ 0, i = 1, 2, …, l}; then

  W = Σ_{i=1}^{l} α_i y_i X_i = Σ_{i∈I} α_i y_i X_i                            (21)

Thus, the dual problem of formulas (1)–(3) is

  min_α  (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨X_i, X_j⟩ − Σ_{i=1}^{l} α_i   (22)
  s.t.   Σ_{i=1}^{l} α_i y_i = 0                                               (23)
         0 ≤ α_i ≤ C,  i = 1, 2, …, l                                          (24)

Let X_i = Σ_{p=1}^{R} x_ip^(1) ∘ x_ip^(2) ∘ … ∘ x_ip^(N) and X_j = Σ_{q=1}^{R} x_jq^(1) ∘ x_jq^(2) ∘ … ∘ x_jq^(N) be CP decompositions; then

  ⟨X_i, X_j⟩ ≈ Σ_{p=1}^{R} Σ_{q=1}^{R} ⟨x_ip^(1), x_jq^(1)⟩⟨x_ip^(2), x_jq^(2)⟩ … ⟨x_ip^(N), x_jq^(N)⟩
             = Σ_{p=1}^{R} Σ_{q=1}^{R} Π_{n=1}^{N} ⟨x_ip^(n), x_jq^(n)⟩        (25)

Substituting formula (25) into formulas (22)–(24) yields the SHTM model:

  max_α  Σ_{i=1}^{l} α_i − (1/2) Σ_{j,k=1}^{l} α_j α_k y_j y_k Σ_{p=1}^{R} Σ_{q=1}^{R} Π_{n=1}^{N} ⟨x_jp^(n), x_kq^(n)⟩   (26)
  s.t.   Σ_{i=1}^{l} α_i y_i = 0                                               (27)
         0 ≤ α_i ≤ C,  i = 1, 2, …, l                                          (28)

Before each update with (9)–(12), it is necessary to calculate y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) and compare it with 1 to determine the update mode for W and b. In the OSGD algorithm, each iteration needs to compute W, and the inner product of W and X_t, which are high-order, high-dimensional tensors, leads to a long calculation time, the so-called "dimension disaster." To save time, α can be updated directly instead of W. The specific steps are as follows.

To make the α and b values reach their optimal values quickly, the SHTM model is first used to train the tensor training samples, giving the initial values α_i and b_0 and the index set I of support tensors. During the iterations, to prevent the variables α and b from fluctuating frequently, the objective function is driven to its optimum in a smooth and stable way: the learning efficiency factor η for the online updates is obtained from formula (18), and then each sample X_t is brought into the weight parameter calculation.

1. When y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) ≥ 1, the sample is correctly classified by the current model and is not a support tensor, and the support tensor set I remains unchanged. By formulas (9) and (21),

  Σ_{i∈I} α_i y_i X_i = (1 − η) Σ_{i∈I} α_i y_i X_i                            (29)

from which we deduce

  α_i = (1 − η) α_i,  i ∈ I                                                    (30)

By formula (10),

  b_k = b_{k−1}                                                                (31)

2. When y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) < 1, the sample is not correctly classified by the current model, and there are two situations.

When the index t of the sample X_t is not in the set I, t is added to I; however, the set would then keep growing over time until it reached the storage capacity. It is therefore required, when adding the index t of X_t to I, to remove one of the samples in I together with its α value. Since a smaller α_j means a smaller weight for the corresponding X_j, the elimination rule is to remove the smallest α_j and its corresponding sample X_j. From formulas (11) and (21),

  Σ_{i∈I} α_i y_i X_i + α_t y_t X_t = (1 − η) Σ_{i∈I} α_i y_i X_i + C η y_t X_t   (32)

from which

  α_i = (1 − η) α_i, i ∈ I;   α_t = C η;   I = I ∪ {t}                         (33)

By formula (12),

  b_k = b_{k−1} + C η y_t                                                      (34)

When the index t of the sample X_t is in the set I, I is kept unchanged. From formulas (11) and (21),

  α_i = (1 − η) α_i, i ∈ I, i ≠ t;   α_t = (1 − η) α_t + C η                   (35)

By formula (12),

  b_k = b_{k−1} + C η y_t                                                      (36)

It is necessary to calculate y_t(⟨W_{k−1}, X_t⟩ + b_{k−1}) before each iterative update. However, if the tensor inner product is computed directly, the model is simply the SVM model with tensors as training samples, and the result is no different from using the SVM directly. Therefore, the tensor data samples are processed by canonical polyadic (CP) decomposition, which not only reflects the spatial structure of the tensor and improves the speed of the computation but also achieves noise reduction, making the calculation results more accurate and more efficient. According to formula (25),

  y_t(⟨W_{k−1}, X_t⟩ + b_{k−1})
    = Σ_{i∈I} α_i y_i y_t ⟨X_i, X_t⟩ + y_t b_{k−1}
    = Σ_{i∈I} α_i y_i y_t Σ_{p=1}^{R} Σ_{q=1}^{R} Π_{n=1}^{N} ⟨x_ip^(n), x_tq^(n)⟩ + y_t b_{k−1}   (37)

As seen from formula (37), the number of multiplications after CP decomposition is (I_1 + I_2 + … + I_N + N − 1) × R², whereas the direct inner product requires I_1 × I_2 × … × I_N multiplications; the model after CP decomposition is clearly faster. Meanwhile, the storage space after CP decomposition is (I_1 + I_2 + … + I_N) × R, while direct tensor storage requires I_1 × I_2 × … × I_N, so CP decomposition saves a large amount of storage space, depending on R. In this article, the value of R is set to 1.
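With R = 1, the identity behind formula (25) says that the inner product of two rank-one tensors equals the product of the factor-wise vector inner products, which is exactly where the multiplication and storage savings quoted above come from. A minimal numpy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def cp_rank1_tensor(factors):
    """Build the full tensor from rank-1 CP factors x^(1), ..., x^(N)
    via successive outer products."""
    t = factors[0]
    for f in factors[1:]:
        t = np.multiply.outer(t, f)
    return t

def cp_inner_product(factors_a, factors_b):
    """<X_a, X_b> computed from CP factors only: the product of the N
    factor-wise inner products (formula (25) with R = 1)."""
    p = 1.0
    for fa, fb in zip(factors_a, factors_b):
        p *= float(np.dot(fa, fb))
    return p

rng = np.random.default_rng(0)
dims = (4, 6, 5)  # a small stand-in for the paper's 4 x 501 x 31 samples
fa = [rng.standard_normal(d) for d in dims]
fb = [rng.standard_normal(d) for d in dims]

# Full inner product: materialise both tensors (4*6*5 numbers each),
# then sum the elementwise products.
full = float(np.sum(cp_rank1_tensor(fa) * cp_rank1_tensor(fb)))

# Factored inner product: only 4 + 6 + 5 numbers per sample are needed.
fast = cp_inner_product(fa, fb)

assert np.isclose(full, fast)
```

The two values agree, but the factored form touches only the sums of the mode sizes rather than their product, mirroring the 536-versus-62,124 comparison made later in the article.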
Construction of extender fault tensor Multi-class support tensor machine Constructing state samples of range extender in tensor To distinguish the normal and four cylinders in five space, each order can be regarded as a kind of influence cases, the two-class support tensor machine will be factor of the range extender, and each element in the extended to multi-class support tensor machine. tensor can be considered as the interaction results of For the multi-class SVM problem, we choose one I 3 J 3 K various factors. A third-order tensor A 2 R was against one-SVM (OAO-SVM) algorithm. For a class c constructed as the state samples of the range extender. classifier problem, the algorithm needs to construct The first order I of the influence factors stands for the c(c – 1)/2 two-classification STM models. In this algo- category of the signals, including the crankshaft angular rithm, we need to obtain the classification decision velocity, crankshaft angular acceleration, reciprocating function of any two kinds of samples. For the training piston inertia force, and the bearing support force. The samples from the p and q classes, where p 6¼ q, the class second-order J is the crank angle, and the third-order label corresponding to the p class is 1 and the q class is K is the rotation speed which is the average speed of the –1, and the two-class STM model is solved as follows crankshaft. As shown in Figure 2, the third-order ten- sor range extender state feature sample was constructed 2 pq pq by ‘‘signal type I3 crank angle J3 rotate speed K.’’ min kk W + C j ð38Þ F i pq pq w , b , j 2 Each element of the third-order tensor samples is pre- i = 1 sented as x , where i 2 I, j 2 J, k 2 K, which repre- ijk pq pq pq s:t: y = 1,hi W , X + b + j ø 1, i = 1, 2, .. . , l i i sents its j type signal under the rotational speed of k ð39Þ and crankshaft angle of i. pq pq pq y = 1,hi W , X + b + j ø 1, i = 1, 2, .. . 
, l i i Algorithm flow ð40Þ pq The main differences between SIOSTM and SIOSVM j ø 0, i = 1, 2, .. . , l ð41Þ include the training method of training samples, the Test sample X can be calculated using formula (41) update method of weight parameters, and the calcula- as follows tion method of inner product. For the training model of the training samples, the SIOSTM model adopts the SHTM algorithm; the training samples are the tensor pq pq yxðÞ = arg max sgnðÞ W , X + bð42Þ hi samples after CP decomposition, which achieves the p = 1, 2, ..., C q = 1, q6¼p purpose of noise reduction and dimension reduction. However, SIOSVM model can only deal with vector Extender fault diagnosis method input samples. So tensor pattern samples need to be expanded into vector pattern samples for training and Diagnostic process classification, which will destroy the data structure. For Comprehensive use of SHTM and OSTM algorithm updates, SIOSTM model updates a and b, the number and the diagnosis process are shown in Figure 1. First, of updates’ times is only related to the number of initial the extender of electric vehicles as the object, and the training samples and the number of test samples, 6 Advances in Mechanical Engineering Figure 1. Flow chart of extender fault diagnosis. Figure 2. Structure chart of third-order extender state sample. SIOSVM model updates are W and b, then it needs to more suitable for complex tensor data. The steps of the do I 3 I 3 3 I + I 3 I 3 3 I times SIOSTM model are shown in Table 1. 1 2 n 1 2 N updates, SIOSTM model updates can be more simple. For the inner product, after the CP decomposition of Fault diagnosis results and analysis the SIOSTM model, the storage space and computing Data acquisition time are greatly reduced. 
For example, in this article, each sample processed by CP decomposition requires To verify the feasibility of SIOSTM in the fault diagno- only 4 + 501 + 31 storage spaces, that is, only 536 sis of the extended-range electric vehicle, the crankshaft feature attributes, and without CP decomposition angular velocity, crankshaft angular acceleration, reci- requires 4 3 y¨ 501 3 31 storage space, that is, 62,124 procating piston inertia force, and bearing support feature attributes. In general, the SIOSTM model is force of four kinds of the signals were selected as Xu et al. 7 Table 1. SIOSTM algorithm process. l I 3I 3I I 2 3 Input: The sample set T = fX , y g , X 2 R , y 2f1 + 1g, maximum number of iterations J i i i i i = 1 Output: Alpha½i,ðÞ i = 1, 2, .. . , l and b Step 1: In the sample set, 25% of the samples were selected as the training set, and trained by the SHTM model to obtain the initial Alpha½iðÞ i = 1, 2, .. . , l , b , and the set I composed of support tensors Step 2: Calculate the learning efficiency h according to formula (21), make k=1 Step 3: A sample X is selected randomly from the sample set T Step 4: According to formulas (4)–(26): H = yðÞ hi W , X + b t k1 t k1 Step 5: If H ø 1, b = b , Alpha½i =ðÞ 1 h Alpha½i, i 2 I k k1 Step 6: If H\1, b = b + Chy , Alpha½i update is divided into two types: k k1 t ffiIf t 62 I, then: Alpha½i=(1 h)Alpha½i, i 2 I, a = min Alpha½i i2I Alpha½t = Ch, j 62 I, I = I [ftg; If t 2 I, then: Alpha½i=(1 h)Alpha½i, i 2 I, i 6¼ t, Alpha½t=(1 h)Alpha½i + Ch, I = I; Step 7: Let k=k + 1, if k<J, go to step 3, otherwise, terminate the algorithm SIOSTM: smooth iterative online support tensor machine; SHTM: support higher-order tensor machine. sample sets are used as the test set for the verification of the diagnosis method. 
normal and misfire fault diagnosis data source. The in situ test bench and acquisition equipment are shown in Figure 3. The oil supply pipe is cut off for a short time to simulate a single-cylinder misfire fault.

Figure 3. In situ test bench.

During data acquisition, the speed range of the extender is 1500–3000 r/min with a sampling interval of 100 r/min, the sampling is repeated 31 times, and the sampling range of 0–720° crank angle is divided into 501 equally spaced points. Therefore, 31 samplings of 5 states, 4 signal types, and 16 rotational speeds are obtained. According to the tensor mode "signal type × crank angle × speed," 80 third-order tensor samples of size 4 × 501 × 31 can be built. In this way, the construction of the tensor-type extender state characteristic samples is completed; 20 samples are randomly selected as the training set, and all 80 samples are used as the test set.

Sample types

To verify the rationality of the construction method of the tensor-type extender state samples, the vector-type extender state sample set and the tensor-type sample set are input to the classification learning machine for training and diagnosis, respectively. The parameters of the vector-type and tensor-type extender state sample sets are shown in Tables 2 and 3.

Table 2. Vector state sample parameters.

State of the extender       Number of samples   Data dimension (vector)   Number of characteristic attributes (no feature extraction)
Normal state, NS            16                  1 × 62,124                62,124
Cylinder 1 misfire, SCM1    16                  1 × 62,124                62,124
Cylinder 2 misfire, SCM2    16                  1 × 62,124                62,124
Cylinder 3 misfire, SCM3    16                  1 × 62,124                62,124
Cylinder 4 misfire, SCM4    16                  1 × 62,124                62,124

Table 3. Tensor-type state sample parameters.

State of the extender       Number of samples   Data dimension (third-order tensor)   Number of characteristic attributes
Normal state, NS            16                  4 × 501 × 31                          536
Cylinder 1 misfire, SCM1    16                  4 × 501 × 31                          536
Cylinder 2 misfire, SCM2    16                  4 × 501 × 31                          536
Cylinder 3 misfire, SCM3    16                  4 × 501 × 31                          536
Cylinder 4 misfire, SCM4    16                  4 × 501 × 31                          536

Experimental results analysis

The data analysis in this article uses MATLAB R2016b. The computer has an Intel(R) Core(TM) i3-3110M 2.40 GHz CPU and 4 GB of memory, and the operating system is Windows 7.

The evaluation indexes of the algorithms are "test accuracy" and "learning time," where the learning time is taken as the average over 10 runs of the algorithm in MATLAB. The learning time includes the training time and the class-estimation time; for the SIOSTM algorithm, it also includes the time of the CP decomposition at R = 1, but not the time of building the samples or of producing the output. For all classification models, the training set consists of 25% of the samples, selected at random.

To verify the stability of the SIOSTM model, different learning efficiency factors h are chosen and compared with the learning efficiency factor calculated by formula (21). The results are shown in Table 4.

Table 4. Accuracy rate for different learning efficiency factors h.

Learning efficiency factor h        Accuracy rate (%)
1e–4                                60
1e–5                                82.5
h calculated by formula (21)        96.25

It can be seen from Table 4 that the classification accuracy obtained with the learning efficiency factor calculated by formula (21) is higher than that obtained with the other factors, because improper learning efficiency factors lead to oscillation of the model. Compared with the results obtained using common learning efficiency factors, the classification accuracy of the model is improved by using the factor calculated by formula (21).

The accuracy for the different learning efficiency factors h is described above. Table 5 compares the accuracy of the SIOSTM, SIOSVM, SHTM, and SVM models, where the accuracy rate equals the total number of samples correctly identified divided by the total number of samples.

Table 5. Accuracy rate of SIOSTM, SIOSVM, SHTM, and SVM models.

Classification model   Extender status   Total number of samples   Total number identified   Accuracy rate (%)
SVM                    NS                16                        16                        100
                       SCM1              16                         6                        37.5
                       SCM2              16                         6                        37.5
                       SCM3              16                         6                        37.5
                       SCM4              16                         6                        37.5
                       Total             80                        40                        50
SIOSVM                 NS                16                        16                        100
                       SCM1              16                        12                        75
                       SCM2              16                        12                        75
                       SCM3              16                        12                        75
                       SCM4              16                        12                        75
                       Total             80                        64                        80
SHTM                   NS                16                        15                        93.75
                       SCM1              16                        16                        100
                       SCM2              16                        15                        93.75
                       SCM3              16                        16                        100
                       SCM4              16                        16                        100
                       Total             80                        78                        97.5
SIOSTM                 NS                16                        15                        93.75
                       SCM1              16                        15                        93.75
                       SCM2              16                        15                        93.75
                       SCM3              16                        16                        100
                       SCM4              16                        16                        100
                       Total             80                        77                        96.25

SIOSTM: smooth iterative online support tensor machine; SIOSVM: smooth iterative online support vector machine; SHTM: support higher-order tensor machine; SVM: support vector machine.

It can be seen from Table 5 that the SHTM model correctly identifies 78 samples, while the SVM model identifies only 40; the accuracy of the SHTM model is thus 47.5% higher than that of the SVM model. Considering the classification accuracy for each state of the extender, the accuracy of the SHTM model in the normal state is slightly lower than that of the SVM model, while its accuracy for the remaining states is much higher. Using tensors as input samples therefore has clear advantages for sample classification: the structural information of the data is maintained and noise reduction is achieved, so the test accuracy of the model is higher.

Comparing the SVM and SIOSVM models, the SIOSVM model correctly identifies 64 samples, while the SVM model identifies only 40. The accuracy of the SIOSVM model is 30% higher than that of the SVM model, and its accuracy for every state is also higher. The accuracy of the model is thus greatly improved by the OSGD method; online learning not only copes with data whose characteristics change over time but also addresses the problem of insufficient storage space.

The accuracy of the SIOSTM model is as high as 96.25%; every other model is less accurate except the SHTM model, which is 1.25% higher. Comparing SIOSTM with SIOSVM, the overall accuracy of the SIOSTM model is 16.25% higher; only in the normal working state is SIOSTM about 6.25% lower than SIOSVM, while for the other working states it is higher, in some cases by about 25%. What is more, the SHTM model needs to store all the samples, while the SIOSTM model only needs to store 0.25% of the samples, so the storage space required by the SIOSTM model is smaller than that of the SHTM model. Therefore, to solve the problems that in vector mode the natural structural information may be lost and the correlation of the original data may be broken, and that the classification effect of the offline learning mode becomes worse with the passage of time while its storage space is insufficient, this article adopts the SIOSTM model. The experimental results verify that SIOSTM can be used in the fault diagnosis of the range extender: the model has higher test accuracy, remains effective as the data change over time, and solves the storage-space problem.

The accuracy of the different classification models is described above; Figure 4 compares the learning times of the SIOSTM, SIOSVM, SHTM, and SVM models. As shown in Figure 4, the learning time of the SIOSTM model is 5.09 s, that of the SHTM model is 5.59 s, that of the SIOSVM model is 11.25 s, and that of the SVM model is 9.42 s. The SIOSTM model is 6.16 s faster than the SIOSVM model, an improvement of 54.76%, and the SHTM model is 3.83 s faster than the SVM model, an improvement of 40.66%. In addition, the experimental results show that using tensor samples and CP decomposition greatly shortens the learning time, and the SIOSTM and SHTM models become even more advantageous in time as the number of samples and the data dimension grow.

Figure 4. Comparison of learning time for the four models.

According to "test precision" and "learning time," the SIOSTM and SHTM models achieve noise reduction while maintaining the structural information and correlation of the data; they show higher test accuracy, shorter learning time, and faster convergence. When storage space is also considered, the SIOSTM model has the greater advantage and is feasible for extender fault diagnosis.

Conclusion

Fault diagnosis for range extender electric vehicles involves many issues, such as large sample sets, nonlinear data, and high-dimensional data. In addition, vector input causes loss of structural information and destruction of data dependency, and offline learning algorithms often suffer from classification performance that worsens with the passage of time and from insufficient storage space. In this article, we proposed the SIOSTM model, which diagnoses the fault parameters by combining the SHTM and OSGD algorithms.

The SVM, SIOSVM, SHTM, and SIOSTM algorithms were used to classify the fault parameters, with diagnostic accuracy and model learning time as the evaluation indexes. The experimental results show that the accuracy rate of the SIOSTM model is 96.25%; except for the SHTM model, which is 1.25% higher, all other models are less accurate than SIOSTM. In terms of learning time, the SIOSTM model needs only 5.09 s, the shortest of all the classification models. In addition, the SIOSTM model solves the storage-space problem, as it only needs to store 0.25% of the samples.

In summary, the fault diagnosis method based on the SIOSTM can not only reduce noise but also maintain the structural information and correlation of the data, which makes the diagnosis model more accurate, faster to converge, and quicker to learn. At the same time, the method can learn online while guaranteeing classification accuracy and solves the problem of data storage capacity. Therefore, it is feasible to apply the SIOSTM model to the fault diagnosis of the electric vehicle range extender.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China under Grant Nos. 51505345 and 51509194, the Science & Technology Research Project of the Education Department of Hubei Province under Grant No. Q20151105, and the open fund of the Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering at Wuhan University of Science and Technology under Grant No. 2017A12.

ORCID iD

Xiaowei Xu https://orcid.org/0000-0003-4422-8325

References

1. Wahono B, Santoso WB and Nur A. Analysis of range extender electric vehicle performance using vehicle simulator. Enrgy Proced 2015; 68: 409–418.
2. Borghi M, Mattarelli E and Muscoloni J. Design and experimental development of a compact and efficient range extender engine. Appl Energ 2017; 202: 507–526.
3. Xu XX, Wang HX and Zhang N. Review of the fault mechanism and diagnostic techniques for the range extender hybrid electric vehicle. IEEE Access 2017; 5: 14234–14244.
4. Lee HJ, Ahn BS and Park YM. A fault diagnosis expert system for distribution substations. IEEE T Power Deliver 2000; 15: 92–97.
5. Ratcliff MA, Dane AJ and Williams A. Diesel particle filter and fuel effects on heavy-duty diesel engine emissions. Environ Sci Technol 2010; 44: 8343–8349.
6. He G, Pan X and Zhang C. Multi-sensor information fusion method and its applications on fault detection of diesel engine. In: International conference on computer science and network technology, Harbin, China, 24–26 December 2011, vol. 4, pp.2551–2555. New York: IEEE.
7. Xia F, Zhang H and Zhang K. Research of condenser fault diagnosis method based on neural network and information fusion. In: International conference on computer and automation engineering, Singapore, 26–28 February 2010, vol. 5, pp.709–712. New York: IEEE.
8. Zhan YL, Wei W and Huo CF. Research on delamination fault diagnosis of marine diesel engine based on support vector machine. In: IEEE international symposium on industrial electronics, Seoul, South Korea, 5–8 July 2009, vol. 20, pp.1224–1227. New York: IEEE.
9. Tao D, Li X and Wu X. Supervised tensor learning. Knowl Inf Syst 2007; 13: 1–42.
10. Hao Z, He L and Chen B. A linear support higher-order tensor machine for classification. IEEE T Image Process 2013; 22: 2911–2920.
11. Liu X, Guo T and He L. A low-rank approximation-based transductive support tensor machine for semisupervised classification. IEEE T Image Process 2015; 24: 1825–1838.
12. Gastaldo P, Pinna L and Seminara L. A tensor-based pattern-recognition framework for the interpretation of touch modality in artificial skin systems. IEEE Sens J 2014; 14: 2216–2225.
13. Takallou HM and Kasaei S. Head pose estimation and face recognition using a non-linear tensor-based model. IET Comput Vis 2014; 8: 54–65.
14. Chen Y, Wang K and Zhong P. One-class support tensor machine. Knowl-Based Syst 2016; 96: 14–28.
15. Sidiropoulos ND, Lathauwer LD and Fu X. Tensor decomposition for signal processing and machine learning. IEEE T Signal Proces 2017; 65: 3551–3582.
16. Mordohai P and Medioni G. Tensor voting: a perceptual organization approach to computer vision and machine learning (Synthesis lectures on image video & multimedia processing), vol. 2. San Rafael, CA: Morgan & Claypool Publishers, 2006, 136p.
17. Xu F, Zhang X and Wu Y. Global existence and the optimal decay rates for the three dimensional compressible nematic liquid crystal flow. Acta Appl Math 2017; 150: 67–80.
18. Cavallanti G, Cesa-Bianchi N and Gentile C. Tracking the best hyperplane with a simple budget perceptron. Mach Learn 2007; 69: 143–167.
19. Kivinen J, Smola AJ and Williamson RC. Online learning with kernels. IEEE T Signal Proces 2004; 52: 2165–2176.
20. Lau KW and Wu QH. Online training of support vector classifier. Pattern Recogn 2003; 36: 1913–1920.
21. Orabona F, Castellini C and Caputo B. On-line independent support vector machines. Pattern Recogn 2010; 43: 1402–1412.
22. Zhao P, Hoi SCH and Jin R. Double updating online learning. J Mach Learn Res 2011; 12: 1587–1615.
23. Auer P, Cesa-Bianchi N and Gentile C. Adaptive and self-confident on-line learning algorithms. J Comput Syst Sci 2002; 64: 48–75.
24. Bollegala D. Dynamic feature scaling for online learning of binary classifiers. Knowl-Based Syst 2017; 129: 97–105.
25. Rasilo H and Räsänen O. An online model for vowel imitation learning. Speech Commun 2017; 86: 1–23.
26. Weel B, D'Angelo M and Haasdijk E. Online gait learning for modular robots with arbitrary shapes and sizes. Artif Life 2017; 23: 80–104.
27. Yan S, Xu D and Yang Q. Multilinear discriminant analysis for face recognition. IEEE T Image Process 2007; 16: 212–220.
28. Plataniotis KN and Venetsanopoulos AN. Color image processing and applications, vol. 12. Berlin: Springer, 2001, p.222.
29. Negi PS and Labate D. 3-D discrete shearlet transform and video processing. IEEE T Image Process 2012; 21: 2944–2954.
30. Chellappa R, Roy-Chowdhury A and Zhou S. Recognition of humans and their activities using video (Synthesis lectures on image video & multimedia processing), vol. 1. San Rafael, CA: Morgan & Claypool Publishers, 2005, 173p.
31. Lu H, Plataniotis KN and Venetsanopoulos AN. MPCA: multilinear principal component analysis of tensor objects. IEEE T Neural Networ 2008; 19: 18–39.
32. Renard N and Bourennane S. Dimensionality reduction based on tensor modeling for classification methods. IEEE T Geosci Remote 2009; 47: 1123–1131.
33. Wang H and Ahuja N. Compact representation of multidimensional data using tensor rank-one decomposition. In: International conference on pattern recognition, Cambridge, 26 August 2004, vol. 1, pp.44–47. New York: IEEE.
34. Bengua JA, Phien HN and Tuan HD. Efficient tensor completion for color image and video recovery: low-rank tensor train. IEEE T Image Process 2017; 26: 2466–2479.
35. Zhou R, Yang X and Wu G. Online support tensor machine. J Front Comput Sci Technol 2013; 7: 611–619.
36. Zhao Q, Zhou G and Adali T. Kernelization of tensor-based models for multiway data analysis: processing of multidimensional structured data. IEEE Signal Proc Mag 2013; 30: 137–148.
37. Wang M, Fang EX and Liu H. Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions. Math Program 2017; 161: 1–31.
38. Wang Z, Crammer K and Vucetic S. Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training. J Mach Learn Res 2012; 13: 3103–3131.
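The accuracy-rate metric (samples correctly identified divided by total samples) can be reproduced directly; the per-state counts below are taken from the SIOSTM rows of Table 5.

```python
# Accuracy rate as defined for Table 5: correctly identified / total, in percent.
def accuracy_rate(identified, total):
    return 100.0 * identified / total

# Correctly identified samples per extender state for the SIOSTM model (Table 5).
siostm = {'NS': 15, 'SCM1': 15, 'SCM2': 15, 'SCM3': 16, 'SCM4': 16}
total_identified = sum(siostm.values())     # 77 of 80 samples
print(accuracy_rate(total_identified, 80))  # 96.25
```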
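The online stochastic gradient descent (OSGD) idea of updating the model one sample at a time, so that the full training set never has to be stored, can be illustrated with a generic online hinge-loss update. The decaying Pegasos-style step size below is an assumption standing in for the learning efficiency factor of the paper's formula (21), and the data stream is a toy example, not the extender data.

```python
import numpy as np

def osgd_train(stream, dim, lam=0.01):
    """Online SGD for a linear hinge-loss classifier: one update per arriving
    sample, so only the current weight vector is kept in memory."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        eta = 1.0 / (lam * t)          # decaying step size (assumed schedule,
                                       # in place of the paper's formula (21))
        margin = y * w.dot(x)
        w = (1.0 - eta * lam) * w      # shrink from the regularizer
        if margin < 1.0:               # hinge loss active: move toward the sample
            w += eta * y * x
    return w

# Toy stream: 200 five-dimensional samples labeled by the sign of the first feature.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=200) > 0, 1.0, -1.0)
w = osgd_train(zip(X, y), dim=5)
acc = float(np.mean(np.where(X @ w > 0, 1.0, -1.0) == y))
```

The same one-sample-at-a-time structure is what lets an online tensor machine keep only a small fraction of the samples, which is the storage advantage reported for SIOSTM.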
Advances in Mechanical Engineering – SAGE
Published: Dec 18, 2018
Keywords: Electric vehicle range extender; fault diagnosis; smooth iterative online support tensor machine; smooth iterative online support vector machine