This paper presents a web-oriented expert system, named iHurdling, to predict results and generate training loads for 110 and 400 m hurdles races. The database contains 40 annual training programmes for the 110 m hurdles and 48 programmes for the 400 m hurdles. The predictive models include linear regressions in the form of ordinary least squares, ridge, LASSO and elastic net, and nonlinear models in the form of a radial basis function neural network and a fuzzy rule-based system. Leave-one-out cross-validation is used to compare the models and choose the best one; the proposed fuzzy-based model has the lowest validation error. The developed web application can support a coach in planning training programmes for hurdles races: it predicts an athlete's results and can generate training loads for an athlete selected from the database. The application can be run on a computer or a mobile device. The system was implemented in the R programming language with the Shiny framework and additional packages. The limitations of the presented approach are related to the lack of consideration of an athlete's physiological and psychological parameters, but the generated training programmes may still serve as a suggestion for the coach.

Keywords: Predictive models; Expert system; Hurdles race; R programming language

1 Introduction

Expert systems have been widely implemented and examined by researchers. They mimic the decision-making abilities of a human expert, and they are designed to solve complex problems by reasoning. Expert system applications include, among others, medicine [29, 53, 60], diagnosis and control of power systems [26, 27], evaluation of journal grades [61], information systems investment evaluation [19], transport management [31, 51], industry [14, 38] and sport [12, 15, 32, 33, 39]. Nowadays, various types of computer tools and methods play an important role in sports science.
Neural Computing and Applications

Contact: Tomasz Krzeszowski (corresponding author, tkrzeszo@prz.edu.pl), Krzysztof Przednowek (krzprz@ur.edu.pl), Krzysztof Wiktorowicz (kwiktor@prz.edu.pl), Janusz Iskra (j.iskra@awf.katowice.pl)

Affiliations:
1 Faculty of Physical Education, University of Rzeszow, ul. Towarnickiego 3, 35-959 Rzeszow, Poland
2 Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, al. Powstancow Warszawy 12, 35-959 Rzeszow, Poland
3 Faculty of Physical Education and Physiotherapy, Opole University of Technology, ul. Proszkowska 76, 45-758 Opole, Poland

Competitors and coaches are looking for new solutions that can support their work. One aspect of such support is the application of machine learning methods, which can be used to calculate performance results [13, 15, 43], identify sporting talent [35, 39, 48, 49] or support the training process [30, 40, 41, 45, 50, 52].

For example, in the paper [13], the authors use artificial neural networks to predict competitive performance in swimming. The neural models were cross-validated, and the results show that the modelling was very precise. The paper [43] describes the use of linear and nonlinear multivariable models as tools to predict the results of 400 m hurdles races. All the models were constructed using the training data of 21 athletes from the Polish National Team. The best prediction results were obtained by the LASSO regression method. Gu et al. [15] proposed an expert system to predict National Hockey League (NHL) game outcomes; the prediction accuracy of the system was 77.5%. Another paper [17] presents a review of data mining techniques that are used for prediction in various sports disciplines.

Roczniok et al. [48] proposed using Kohonen's neural networks for the recruitment process in competitive swimming. Experiments were conducted on a group of 140 young swimming contestants aged about 10. Another approach to identifying sporting talent was proposed by Rogulj et al. [49]. The authors developed two methodological approaches to recognize an athlete's morphological compatibility for various sports. In the paper [35], Maszczyk et al. determined the usefulness of neural models in optimizing recruitment processes. Statistical analyses were carried out on the measured results of javelin throwers using full take-off. For the investigated group, the perceptron network with the 4-3-2-1 structure achieved the best predictive results.

In the paper [50], Ryguła et al. proposed using an artificial neural network (ANN) to model swimming performance in the 200 m individual medley and the 400 m front crawl events. ANNs were also used to analyze tactics in team sports [41]. Another study was devoted to the use of ANNs to classify kick techniques [30]. The aim of that paper was to find out whether it is possible to distinguish two different kick techniques from a kick impact force profile. The paper [52] presents the application of a neural network to model swimming performance. The authors created highly realistic models of swimming performance prediction based on previously selected criteria that were related to the dependent variable. Experiments were conducted on 138 swimmers (65 males and 73 females) at national level.

Despite the existing methods to predict and support training, there is a lack of tools that could be used by coaches during the training process. Papić et al. [39] developed a fuzzy expert system for scouting and evaluating young sporting talent. A similar system is presented in [33], where the authors perform talent identification in soccer using a web-oriented expert system.

From the review of the literature, it can be seen that there is a need to create tools for supporting sports training. The main contribution of this paper is, therefore, to develop a web-oriented expert system, named iHurdling, to predict results and generate training loads in the 110 and 400 m hurdles. The system we have developed can support a coach in planning training programmes in hurdles races. The system uses linear regression models (OLS, ridge, LASSO, elastic net) and nonlinear models (RBF, fuzzy model, OLS with fuzzy correction). The main advantages of this system are an easy-to-use interface and compatibility with different platforms, which means that it can be run from a computer or a mobile device.

2 Training data

The training data contain training plans carried out by hurdlers in the Polish National Team. One record contains the parameters of an athlete and the training programme carried out by this athlete during their annual training cycle. The models for result prediction (PR) and for generating training loads (GT) were built using 21 variables (Table 1). For the PR models, the input variables x1-x5 represent the parameters of the athlete, the input variables x6-x20 represent the training loads and the output variable y represents the predicted result. For the GT models, x1-x6 represent the parameters of the athlete and y1-y15 represent the training loads. The training programmes were recorded according to the classification proposed in [22]. The classification consists of two areas of influence: energy (exercise) and information (related to the formation of technique). The analyzed training loads comprise speed, endurance and strength exercises, as well as exercises that develop the technique of hurdle clearance. A similar classification of exercises can be found in other papers devoted to sprinters and hurdlers [2, 37]. The values of these loads are the sums of all loads of the same type realized during the annual training cycle.

The results for the hurdles races were registered before and after the cycle. Both runs were carried out under simulated starting conditions of the 110 and 400 m hurdles races. In this study, the current result at the training distance was assumed as the indicator of performance level. As concluded in the paper [25], this result is strongly correlated with performance parameters and other motor skills tests used in hurdles races. For the 110 m hurdles, the training data contain 40 records. These records were collected from 18 highly trained athletes (mean result in the 110 m hurdles: 14.02 s) aged 18-28. For the 400 m hurdles, 48 records from 21 athletes aged 19-27 were used. The hurdlers practising the 400 m event also had a high sport level (mean result in the 400 m hurdles: 51.26 s).

Table 1 Description of variables used to construct the PR and GT modules for 110 and 400 m hurdles

PR   GT   Description                                            110 m (mean / min / max)     400 m (mean / min / max)
y    x1   Result after training (s)                              14.02 / 13.26 / 15.13        51.27 / 48.19 / 53.60
x1   x2   Age (years)                                            21.9 / 18.0 / 28.0           22.3 / 19.0 / 27.0
x2   x3   Body height (cm)                                       187.3 / 181.0 / 195.0        185.0 / 177.0 / 192.0
x3   x4   Body mass (kg)                                         77.8 / 71.0 / 83.0           74.3 / 69.0 / 82.0
x4   x5   Body mass index                                        22.1 / 20.3 / 23.5           21.2 / 19.7 / 24.4
x5   x6   Result before training (s)                             14.33 / 13.34 / 15.40        51.91 / 48.70 / 54.70
x6   y1   Maximal and technical speed exercises (m)              12,513 / 5800 / 17,970       9428 / 2910 / 18,920
x7   y2   Technical and speed exercises (m)                      5925 / 2470 / 10,200         4253 / 240 / 9450
x8   y3   Speed and specific hurdle endurance exercises (m)      11,961 / 3150 / 20,400       25,342 / 6400 / 101,450
x9   y4   Pace runs exercises (m)                                64,087 / 25,780 / 100,300    163,796 / 88,000 / 393,800
x10  y5   Aerobic endurance exercises (m)                        328,631 / 80,600 / 550,000   363,257 / 151,000 / 692,500
x11  y6   Strength endurance exercises (m)                       20,638 / 1850 / 46,595       41,069 / 1750 / 169,265
x12  y7   Strength of lower limbs exercises (kg)                 291,119 / 96,400 / 658,600   224,099 / 96,900 / 504,540
x13  y8   Trunk strength exercises (amount)                      38,442 / 5240 / 145,000      46,438 / 6100 / 233,680
x14  y9   Upper body strength exercises (kg)                     3352 / 1630 / 4850           3305 / 760.0 / 29,610
x15  y10  Explosive strength of lower limbs exercises (amount)   1244 / 0 / 2214              823 / 282 / 2138
x16  y11  Explosive strength of upper limbs exercises (amount)   656 / 213 / 1850             443 / 60 / 1360
x17  y12  Technical exercises, walking pace (min)                456 / 130 / 1110             424 / 45 / 816
x18  y13  Technical exercises, running pace (min)                574 / 195 / 1450             518 / 150 / 1500
x19  y14  Runs over hurdles exercises (amount)                   778 / 362 / 1317             416 / 121 / 775
x20  y15  Hurdle runs in varied rhythm exercises (amount)        1077 / 320 / 1850            857 / 36 / 1680

3 Mathematical models

In this paper, we use regression methods for building multi-input, single-output (MISO) and multi-input, multi-output (MIMO) models. The MISO models are used for the prediction of results, while the MIMO models are used in the generation of training loads. In the simplified description that follows, we assume that we have one output, since a MIMO model will be represented as a set of MISO models.

In our expert system, we use:
- linear models in the form of ordinary least squares (OLS) [7], ridge regression (RIDGE) [18], least absolute shrinkage and selection operator (LASSO) [54] and elastic net (ENET) [62],
- nonlinear models in the form of a radial basis function network (RBF) [6] and a fuzzy rule-based system (FRBS) [47].

3.1 Linear models

Consider a MISO model with p inputs (predictors) forming the vector x = [x_1, x_2, ..., x_p] and one output (response) y. The goal is to build the regression function

y = f(x) = Σ_{j=1}^{p} x_j w_j    (1)

based on a data set containing n observations in the form of pairs (x_i, y_i), where x_i = [x_{i1}, x_{i2}, ..., x_{ip}], i = 1, ..., n. The element x_{ij} denotes the jth predictor in the ith observation, and y_i is the response in the ith observation.

The linear regression problem can be written as a matrix equation of the form

y = Xw    (2)

where

X = [ x_11  x_12  ...  x_1p
      x_21  x_22  ...  x_2p
      ...
      x_n1  x_n2  ...  x_np ]    (3)

and w = [w_1, w_2, ..., w_p]^T, y = [y_1, y_2, ..., y_n]^T. Denoting by J(w, λ) a cost function, the problem of finding a linear model involves minimizing the function J(w, λ), that is

ŵ = arg min_w J(w, λ)    (4)

where ŵ is the vector of the optimal parameter values.
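To make Eqs. (2)-(4) concrete for the OLS cost, the sketch below solves the normal equations ŵ = (XᵀX)⁻¹Xᵀy in pure Python for a small data set. This is an illustration only (the paper's models are fitted in R), and all helper names (`transpose`, `matmul`, `solve`, `ols_fit`) are ours, not the authors'.

```python
# Illustrative sketch: ordinary least squares via the normal equations,
# w = (X^T X)^(-1) X^T y, for a small number of predictors p.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def ols_fit(X, y):
    # Form X^T X and X^T y, then solve the normal equations.
    Xt = transpose(X)
    XtX = matmul(Xt, X)
    Xty = [sum(Xt[i][j] * y[j] for j in range(len(y))) for i in range(len(Xt))]
    return solve(XtX, Xty)

# Noise-free data generated from y = 2*x1 + 3*x2, so OLS recovers w exactly.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 5.0], [4.0, 2.0]]
y = [8.0, 7.0, 21.0, 14.0]
w = ols_fit(X, y)  # close to [2.0, 3.0]
```

The same fitting routine can be re-run n times on reduced data sets to produce the leave-one-out errors used later in the paper.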
For the linear models, the cost functions have the form

J_OLS(w) = ||y - Xw||_2^2    (5)

J_RIDGE(w, λ) = ||y - Xw||_2^2 + λ ||w||_2^2    (6)

J_LASSO(w, λ) = ||y - Xw||_2^2 + λ ||w||_1    (7)

J_ENET(w, λ1, λ2) = ||y - Xw||_2^2 + λ1 ||w||_2^2 + λ2 ||w||_1    (8)

where λ, λ1 and λ2 are non-negative regularization parameters. The norms ||.||_2 and ||.||_1 denote the Euclidean and the Manhattan norms, respectively. The RIDGE, LASSO and ENET regressions are regularized, which means that they can be used when the problem is ill-conditioned. A detailed description of the linear models can be found, for example, in [58].

3.2 Choosing the best model

All models were tested using the cross-validation method. This is a method of evaluating the generalization ability (prediction for new data, not involved in modelling) of the model being created. In cross-validation, data are divided into two subsets: a training set and a testing (validation) set. In this study, due to the small amount of data (n = 40 for 110 m and n = 48 for 400 m), LOOCV (leave-one-out cross-validation) was used [3]. The idea of this method is to extract from the data set n learning subsets. Each subset is created by removing only one pair from the data set, which becomes a test pair. Then, for each resulting subset, a model is constructed and evaluated by determining the error for the remaining test pair. The predictive ability of a model is expressed by the root mean square error of cross-validation (RMSE_CV), calculated as

RMSE_CV = sqrt( (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)^2 )    (9)

where ŷ_i is the output of the model obtained after removing the pair (x_i, y_i) from the data set.

3.3 Nonlinear models

3.3.1 RBF models

An RBF network is a feed-forward network that typically consists of three layers: an input layer, a hidden layer and an output layer. The input layer is composed of nodes that receive the input signal x, with one node for each predictor variable. The hidden layer is composed of nodes with radially symmetric activation functions. The kth hidden node measures the distance between the input vector x and the centre c_k of its radial function:

φ_k(x) = φ(||x - c_k||)    (10)

The norm ||.|| is usually taken as the Euclidean distance, and φ is typically taken to be the Gaussian function. The output layer is composed of a node that receives the outputs of the nodes in the hidden layer. This node calculates the output of the network as a linear combination of nonlinear functions of the form

y = Σ_{k=1}^{m} φ_k(x) w_k    (11)

where m is the number of nodes in the hidden layer, φ_k(x) is a basis function and w_k is the weight of the kth neuron in the output node. The training of an RBF network involves: the number of hidden neurons, the parameters of the radial functions in the hidden layer and the weights in the output layer.

3.3.2 Fuzzy models

In this paper, we propose two approaches to using the FRBS [47] in regression problems. In the first approach, the fuzzy model is built similarly to the RBF model, that is, it is learned from the original data. (This model is called FUZZY.) In the second approach, the FRBS is used for the nonlinear correction of the OLS model. (This model is called F-OLS.) The idea is to change the output of a linear model by adding a nonlinear correction term, in such a way that the predictive error is reduced (Fig. 1). First, we build the OLS model and remember its cross-validation errors, and next we build a fuzzy model that "learns" these errors. The design procedure for building F-OLS models is listed below.

Step 1. Cross-validation of the OLS model ȳ = f_OLS(x) for the data (x_i, y_i). In the ith step of cross-validation, the error has the form

e_i = y_i - ȳ_i    (12)

where ȳ_i = f_OLS(x_i).

Step 2. Constructing the fuzzy (nonlinear) corrector

d = f_c(x)    (13)

for the data (x_i, e_i). This corrector predicts the errors obtained in Step 1. The best fuzzy model can be chosen on the basis of cross-validation conducted for different numbers of fuzzy sets.

Step 3. Cross-validation of the OLS model with the corrected error in the form

e_i^new = y_i - (ȳ_i + d_i)    (14)

where ȳ_i = f_OLS(x_i) and d_i = f_c(x_i).

Step 4. The predicted output of the F-OLS model is determined as

y_FOLS = f_OLS(x) + f_c(x)    (15)

where f_c(x) is the function of the fuzzy corrector chosen in Step 3 (Fig. 1).

Fig. 1 The idea of calculating the output of the F-OLS model. The variable ȳ = f_OLS(x) is the output of the ordinary least squares estimator, and d = f_c(x) is the output of the fuzzy nonlinear corrector

4 Expert system modules

The expert system consists of two modules, the prediction of results (PR) and the generation of training loads (GT), for the 110 and 400 m distances. The regression models for both modules were calculated in the R language [46]. The functions with the arguments used to generate the models are shown in Table 2 and described below.

Table 2 R functions for model training

Model   Function      Arguments
OLS     lm            formula = y ~ ., data.train
RIDGE   lm.ridge      formula = y ~ ., data.train, lambda ∈ [0, ∞)
LASSO   enet          data.x, data.y, lambda = 0, normalize = TRUE, intercept = TRUE
        predict.enet  model, newx, s ∈ [0, 1]
ENET    enet          data.x, data.y, lambda ∈ [0, ∞), normalize = TRUE, intercept = TRUE
        predict.enet  model, newx, s ∈ [0, 1]
RBF     rbf           data.x, data.y, size ∈ [1, 10], maxit = 1000, linOut = TRUE
Fuzzy   frbs.learn    data.train, range.data, method.type = "WM",
                      control = list(num.labels = l, type.mf = "GAUSSIAN",
                      type.defuz = "WAM", type.tnorm = "MIN",
                      type.implication.func = "MIN")

The function predict.enet is used for selecting the parameter s for the LASSO and ENET models.

The function lm was used to calculate the OLS regression, and the ridge regressions were calculated using the function lm.ridge from the "MASS" package [55] (with λ > 0 in (6)). The LASSO and the elastic net regressions were obtained with the function enet included in the "elasticnet" package [63]. This function has two parameters (λ, s), where λ ≥ 0 corresponds to the ridge penalty in formula (8) and s ∈ [0, 1] is the fraction of the L1 norm. The pair (λ, s) is used instead of the pair (λ1, λ2) in formula (8) because the elastic net regression can be treated as the LASSO regression for an augmented data set [62]. Taking λ = 0, we get the LASSO regression with one parameter s for the original data. The ENET models were selected by searching the parameters λ and s.

This study uses artificial neural networks in the form of radial basis functions (RBF). The training data were scaled before the RBF training, and the results of the predictions were unscaled. All the analyzed networks have one hidden layer. For the implementation of the neural networks, the function RSNNS::rbf was used [6]. The optimal neural model was determined by searching the number of hidden neurons in the range from 2 to 10.

The fuzzy models were calculated using the function frbs.learn from the "frbs" package [47]. The learning method was the Wang-Mendel (W-M) algorithm [56]. This algorithm generates fuzzy rules from input-output data pairs. The input space is divided into fuzzy subspaces, and fuzzy rules are extracted for each subspace. The W-M method is a one-pass procedure and does not need time-consuming training. In the fuzzy models, Gaussian membership functions are used, the t-norm is "minimum", the defuzzification is the "weighted average method", and the implication is "minimum".
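The four F-OLS steps of Sect. 3.3.2 can be sketched end-to-end. The sketch below is a simplified stand-in, not the authors' R implementation: a one-predictor OLS model without intercept, with a 1-nearest-neighbour lookup playing the role of the fuzzy corrector f_c; all names are illustrative.

```python
# Sketch of the F-OLS idea: fit OLS, collect leave-one-out errors
# e_i = y_i - f_OLS(x_i), let a nonlinear corrector "learn" them, and
# predict y = f_OLS(x) + f_c(x).  A 1-NN lookup stands in for the FRBS.

def ols_w(xs, ys):
    # One-predictor least squares without intercept: w = sum(x*y) / sum(x^2).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 4.1, 9.3, 15.8, 25.1]   # curved data: a straight line under-fits

# Step 1: leave-one-out errors of the OLS model (Eq. 12)
errs = []
for i in range(len(xs)):
    w = ols_w(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
    errs.append(ys[i] - w * xs[i])

# Step 2: the corrector f_c "learns" these errors (here: nearest-neighbour lookup)
def f_c(x):
    j = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return errs[j]

# Step 4: corrected prediction (Eq. 15): y_FOLS = f_OLS(x) + f_c(x)
w_all = ols_w(xs, ys)
def f_fols(x):
    return w_all * x + f_c(x)
```

On this toy data the corrected prediction at x = 5 lies closer to the observed value than the plain OLS prediction, which is exactly the effect the correction term is meant to produce.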
The number of fuzzy sets l was determined by calculating cross-validation errors as described in Sect. 3.3.2.

4.1 Models for result prediction

The cross-validation errors RMSE_CV and the parameters of the models for the PR module are presented in Table 3. The parameters were chosen on the basis of the plots shown in Figs. 2 and 3. In the case of the ridge regression, the regularization parameter λ ∈ [0, 40] was considered with the step 0.1 for both distances. In the case of the LASSO regression, the parameter s ∈ [0, 1] was considered with the step 0.01. For the ENET regression, the following parameters were chosen: λ ∈ [0.1, 0.25] with the step 0.008 and s ∈ [0.4, 0.8] with the step 0.021 for the distance of 110 m, and λ ∈ [0, 0.06] with the step 0.0032 and s ∈ [0.3, 0.5] with the step 0.01 for the distance of 400 m. The RBF model was analyzed for the number of neurons in the hidden layer m ∈ {2, 3, ..., 10}, and the fuzzy models were analyzed for the number of fuzzy sets l ∈ {2, 3, ..., 13}. Based on the conducted analysis, the best models (the models with the smallest cross-validation error) were selected (Table 3). It can be seen that for both the 110 and the 400 m distances, the lowest error was obtained by the F-OLS regression. The best F-OLS models have eight fuzzy sets for 110 m and nine sets for 400 m. The largest error was obtained by the OLS regression for the 110 m and by the FUZZY model for the 400 m.

Table 3 Errors and parameters for the PR module for 110 and 400 m hurdles

                OLS      RIDGE     LASSO             ENET                 RBF     FUZZY   F-OLS
110 m
  RMSE_CV       0.3807   0.2276    0.2397            0.1996               0.1985  0.2572  0.0851
  Param.        -        λ = 16.1  λ = 0, s = 0.04   λ = 0.16, s = 0.56   m = 8   l = 4   l = 8
400 m
  RMSE_CV       0.6959   0.5743    0.4463            0.4308               0.6953  0.9288  0.1533
  Param.        -        λ = 11.6  λ = 0, s = 0.28   λ = 0.01, s = 0.36   m = 6   l = 3   l = 9

The meaning of the parameters is as follows: λ and s are tuning parameters for the regularized models, m is the number of hidden neurons in the RBF network, and l is the number of fuzzy sets in the fuzzy models.

4.2 Models for generation of training loads

For the GT module, each output of the model (y1-y15) was considered and analyzed in a similar way as for the result prediction module. The errors RMSE_CV for the GT module are presented in Table 4, while the parameters of the models are presented in Table 5. The models in the GT module were cross-validated similarly to the PR module. For example, for the output y14 the FUZZY model has the largest errors (200.1 for 110 m and 132.9 for 400 m), and the F-OLS model has the smallest errors (33.18 for 110 m and 105.1 for 400 m). From Table 4, it can be observed that the F-OLS model has the smallest RMSE_CV for all outputs.

5 Graphical user interface

The graphical user interface was implemented in the R language using the shiny [11], shinyjs [4], shinythemes [10], shinydashboard [9] and rmarkdown [1] libraries. This interface is a web-oriented application and therefore requires only a web browser and an Internet connection to be used. The current version of the developed system is available at https://hurdles.shinyapps.io/ihurdling. The application shown in Fig. 4 consists of three panels labelled "Result prediction", "Generation of training loads" and "Athletes' database". On the left side of the window is a sidebar menu with links to each panel. The radio button in this sidebar is used to select the PR or GT module. Moreover, the user can choose one of the developed regression models and generate reports. The footer contains information about the application and the authors.

Fig. 2 Cross-validation errors for the linear models (RIDGE, LASSO, ENET) in result prediction (plots of RMSE_CV versus λ and s for the 110 and 400 m distances)

Fig. 3 Cross-validation errors for the nonlinear models (RBF, FUZZY, fuzzy corrector f_c) in result prediction (plots of RMSE_CV versus m and l for the 110 and 400 m distances)

5.1 Panel for result prediction

The "Result prediction" panel is used for entering data and for result prediction (Fig. 4). The input variables are grouped into five boxes: "Athlete's parameters", "Training loads—endurance", "Training loads—technique and rhythm", "Training loads—strength" and "Training loads—speed". The value of each input can be modified using appropriately scaled sliders. For example, the box "Training loads—endurance" presented in Fig. 5 has four sliders for changing endurance training loads. Each slider has a range determined on the basis of the minimum and maximum values in the database (Table 1) and depends on the distance of the hurdles race. For instance, the slider "Pace runs" for the 110 m hurdles has a range from 25,000 to 101,000 m with a step equal to one metre. In the last box, labelled "Results", two textOutput fields display the current and predicted results. Prediction of the result is performed automatically after changing the position of any slider. Moreover, the result depends on the radio button that selects the method in the sidebar menu. In this way, the user can modify the training loads and observe the changes that occur in the expected result. Generating a report from result prediction creates a .pdf file, which contains the values from all sliders and the predicted result.
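The parameter scans described in Sect. 4.1 (for example, λ ∈ [0, 40] with step 0.1 for ridge) amount to evaluating the RMSE_CV of Eq. (9) at each grid point and keeping the minimizer. A minimal one-predictor, no-intercept sketch in Python (illustrative only; the system itself is implemented in R, and the helper names are ours):

```python
# Leave-one-out grid search for a one-predictor ridge model without intercept.
import math

def ridge_fit(xs, ys, lam):
    # Minimizes sum((y - w*x)^2) + lam*w^2, giving the closed form
    # w = sum(x*y) / (sum(x^2) + lam).
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def rmse_cv(xs, ys, lam):
    # Eq. (9): refit n times, each time holding out one (x_i, y_i) pair.
    sq = 0.0
    for i in range(len(xs)):
        w = ridge_fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
        sq += (ys[i] - w * xs[i]) ** 2
    return math.sqrt(sq / len(xs))

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]          # roughly y = 2x
grid = [k * 0.1 for k in range(401)]    # lambda in [0, 40] with step 0.1
best = min(grid, key=lambda lam: rmse_cv(xs, ys, lam))
```

The same loop structure applies to the s grids of LASSO and ENET and to the integer grids for m and l.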
5.2 Panel for generation of training loads

Another system panel is the generation of training loads for both hurdles distances (Figs. 6, 7). This module consists of two boxes: "Athlete's parameters" and "Generated training—annual cycle". The first box is used to enter the athlete's data, i.e. age, body height, body weight and his current result. This box also includes an option to choose the training generation mode. The user can choose the option of generating one training programme or the option of generating the training loads for a longer period of his career. The selection of the first option will cause a slider with the expected result to appear under the slider with the current result. If the "career" option is selected, these sliders are not available. The "career" option makes it possible to generate six consecutive training programmes which improve the result by 0.25 s each year (from 15.00 to 13.50 s) for the 110 m hurdles and by 1 s each year (from 53.00 to 48.00 s) for the 400 m hurdles.

The contents of the box "Generated training—annual cycle" change dynamically, depending on the mode. The "one training" option will generate a list of training loads with suggested values (Fig. 6). In addition, a graph is generated in which the values of the training loads, expressed as a percentage of the maximum value of the given output, are presented. The second option is "career"; its selection generates a table containing six annual training plans and 15 graphs showing the loads over the athlete's entire career (Fig. 7). In this option, the starting result is always constant and is 14.75 s for 110 m and 53 s for 400 m, respectively. Results are generated in the form of a table, where each row represents an annual training cycle, and in the form of graphs, where the x-axis is the expected result and the y-axis is the value of the training load. The career graphs allow observation of the changes in individual loads over a 6-year career. The coach can observe which load needs to be increased, which decreased and which should stay at the same level.

Table 4 Errors (RMSE_CV) for the GT module for 110 and 400 m hurdles

110 m
Output  OLS      RIDGE    LASSO    ENET     RBF      FUZZY    F-OLS
y1      2550     2429     2304     2304     2219     2572     1829
y2      1700     1632     1617     1606     1528     1675     244.0
y3      4351     3985     3933     3814     3843     3629     2630
y4      24,040   21,170   21,270   21,120   20,310   20,210   6254
y5      112,000  110,900  106,600  106,600  80,681   88,870   70,780
y6      12,570   11,240   11,170   11,170   10,693   9644     2546
y7      152,100  129,600  140,200  129,600  131,123  112,200  21,270
y8      30,190   26,730   27,970   26,360   25,366   22,570   3410
y9      654.1    610.9    606.1    606.1    632.8    622.3    519.2
y10     579.7    484.0    482.4    482.4    398.3    327.9    155.2
y11     267.4    244.7    247.5    239.7    191.5    236.7    143.1
y12     302.1    263.1    263.4    259.4    248.9    266.8    98.17
y13     385.5    322.8    320.5    320.5    310.7    256.1    39.93
y14     190.9    179.7    188.8    186.3    167.7    200.1    33.18
y15     436.0    401.5    400.7    391.2    399.1    434.0    205.6
400 m
y1      4477     4189     4110     4109     3038     2511     597.7
y2      1864     1778     1755     1754     1422     1460     855.9
y3      15,120   14,270   14,190   14,190   14,308   16,670   3473
y4      62,580   60,770   60,040   60,040   63,378   52,860   11,110
y5      107,100  98,490   98,120   98,120   98,906   110,600  39,340
y6      24,777   23,440   23,380   23,270   25,972   26,070   3261
y7      92,100   88,030   84,830   84,830   81,357   73,900   21,150
y8      46,360   44,810   44,510   44,510   44,399   47,740   6754
y9      4581     4190     4188     4185     4274     4139     1481
y10     369.4    350.0    350.2    350.2    365.8    428.1    61.88
y11     287.5    275.8    276.6    275.6    290.9    223.3    37.23
y12     265.8    237.6    235.8    235.8    239.3    187.3    108.3
y13     285.2    273.6    271.2    271.2    282.1    284.7    175.0
y14     127.7    124.6    122.8    122.8    115.6    132.9    105.1
y15     369.0    362.4    356.3    356.3    314.0    301.2    111.4
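The career schedule described in Sect. 5.2 can be sketched as a simple loop. The function name is ours, not the application's, and only the 110 m figures (0.25 s per year, starting from 15.00 s) are used:

```python
def career_targets(start, step, years=6):
    # One target result per annual cycle, each improving (lowering) the
    # previous one by `step` seconds; helper name is illustrative only.
    return [round(start - step * (i + 1), 2) for i in range(years)]

targets_110 = career_targets(15.00, 0.25)
# -> [14.75, 14.5, 14.25, 14.0, 13.75, 13.5]
```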
Generating a report from the "Generated training" panel creates a .pdf file, which contains the values from the "Athlete's parameters" box and a table with one or six annual training cycles, depending on the type of training generation.

Table 5 Parameters for the GT module for 110 and 400 m hurdles. The meaning of the parameters is as follows: λ and s are tuning parameters for the regularized models, m is the number of hidden neurons in the RBF network, and l is the number of fuzzy sets in the fuzzy models

110 m
Output  RIDGE λ  LASSO s  ENET (λ, s)   RBF m  FUZZY l  F-OLS l
y1      15.9     0.06     (0, 0.06)     4      5        3
y2      15.3     0.03     (6.03, 0.40)  2      7        10
y3      32.5     0.01     (1.80, 0.41)  5      4        3
y4      47.5     0.01     (0.87, 0.59)  7      7        7
y5      4.38     0.01     (0, 0.01)     9      5        3
y6      500      0        (0, 0)        9      8        7
y7      39.6     0.04     (0.65, 0.56)  2      6        10
y8      16.6     0.05     (1.30, 0.54)  10     12       12
y9      500      0        (0, 0)        2      13       3
y10     500      0        (0, 0)        9      5        5
y11     15.0     0.06     (1.20, 0.47)  10     4        4
y12     256      0        (49.0, 0.17)  2      3        6
y13     294      0.01     (0, 0.01)     7      10       13
y14     10.7     0.48     (19.3, 0.38)  5      4        10
y15     500      0        (0.01, 0.16)  5      3        4
400 m
y1      500      0.03     (0.06, 0.09)  9      13       9
y2      500      0.02     (0.02, 0.03)  2      3        5
y3      288      0.05     (0, 0.05)     2      10       10
y4      500      0.04     (0, 0.04)     2      9        9
y5      500      0.01     (0.01, 0.01)  2      10       6
y6      30.7     0.51     (0.04, 0.72)  2      13       13
y7      19.5     0.33     (0, 0.33)     9      13       8
y8      156      0.15     (0, 0.15)     2      11       11
y9      322      0.02     (0.02, 0.05)  2      11       7
y10     88.8     0.08     (0, 0.08)     3      10       11
y11     72.6     0.06     (0.01, 0.12)  4      6        13
y12     500      0        (0, 0)        2      11       5
y13     339      0.02     (0, 0.02)     3      2        5
y14     10.5     0.21     (0, 0.21)     7      2        2
y15     140      0.04     (0, 0.04)     7      5        6

5.3 Panel for athletes' database

The third system panel is used to create and change the database containing athletes' details (Fig. 8). This panel consists of two boxes: "Athlete's database" and "Edit". In the first box, the records of the database loaded from a file are displayed. The system supports files saved in the .csv format with ";" as the field separator and "." as the decimal point. The database file contains the following columns: "Name", "Surname", "Age", "Body Mass" and "Body Height". This box displays all athletes in a table; an athlete is chosen by marking the appropriate line in this table. Furthermore, the name of the selected athlete is displayed in the sidebar. When an athlete is selected, his data can be edited via the "Edit" box. The edits are saved with the "Save" button, and the "Delete" button removes the athlete from the database. The athlete is deselected by re-selecting him/her in the database. If no athlete is selected, a new athlete can be entered into the database using the "Edit" window; after a new athlete is entered, the user should click "Save as new". After each operation performed on the database, the user should save the database using the "Save database" button on the first panel. The second button on the panel ("Clear database") clears the database from the application memory. In the "Athletes' database" panel, it is not possible to generate reports.

6 Discussion

In this paper, mathematical models for the generation of training loads and the prediction of results expected from athletes training for the 110 and 400 m hurdles races were presented. The best model, verified by LOOCV in each of the considered tasks and for each distance, turned out to be the F-OLS model proposed by the authors. The application of fuzzy models in sport was also presented by Mezyk and Unold [36]. Their goal was to find rules that can express swimmers' feelings the day after in-water training. Their method was characterized by better predictive ability than the traditional methods of classification, and its effectiveness was at the level of 68.66%. In Papić et al. [39], a fuzzy expert system was also presented. This system was based on the knowledge of experts in the field of sport, as well as data obtained as a result of motor tests. The model suggested the most suitable sport, and it was designed to search for prospective sports talents.
Evaluation of the system showed high reliability and high correlation with lete’s parameters’’ box and a table with one or six annual top experts in the ﬁeld. training cycles depending on the types of generating While analyzing the literature, it can be also noticed that training loads. mathematical models frequently used in sports are artiﬁcial neural networks [34, 35, 40, 44, 48, 50, 52, 58]. Numerous 5.3 Panel for athletes’ database studies have shown that the ANN is a means of predicting sports results which has a good predictive ability [13, 59]. The third system panel is used to create and change the Thus, the ANN enables a coach to model the future level of database containing athletes’ details (Fig. 8). This panel athletes performance and supports the process of sports 123 Neural Computing and Applications Fig. 4 Screenshot of the iHurdling application with PR panel selection [34, 40, 48, 52]. For example, Silva et al. [52] Sports training is the matter of making decisions about presented high realistic models of swimming performance the quality (type of exercise) and the quantity (volume). prediction based on multilayer perceptron. To establish a This is a classical principle of sports training, emphasized proﬁle of the young swimmer, nonlinear combinations in all textbooks on the theory of sports [8, 42]. Selection of between preponderant variables for each gender and swim training means and their distribution at subsequent stages performance in the 200 m medley and 400 m front crawl of sports training is the main element of hurdlers’ training events were developed. Artiﬁcial neural networks are also optimization on both distances, i.e. 110 and 400 m [50]. widely used in the process of planning training loads The selection of exercises (training means) in hurdling is [44, 50]. 
In [50], Ryguła presents a new approach for supported by research in the ﬁeld of motor preparation determining training loads in a group of 16- and 17-year- (strength, speed, endurance) as well as in relation to the old girls practising 100 m run. technical structure of the event (kinematic analyses) [21]. 123 Neural Computing and Applications signiﬁcantly varied, taking into account the material (sports performance level of runners), method and period of training. In the review study by Arcelli et al., [2] those parameters adopted values within the range of 28–70% (aerobic) and 30–72% (anaerobic). The authors suggested that the higher the sports performance level, the higher the share of anaerobic element. The determination of the type of runner due to the aerobic and anaerobic processes would certainly make it possible to introduce some additional information in order to develop individual training. However, this problem has a logistic disadvantage, as monitoring of physiological reactions in hurdling is limited to the months when the athletes take part in competitions. Winter conditions are not conducive to speciﬁc running tests, and the choice of substitute distances may negatively affect the individual abilities of the hurdler. Taking into account the extensive scope of hurdlers’ exercises, the basic problem of a coach is the choice of exercises, their volume and proportions during particular training periods. The researchers pay their attention to that speciﬁcity of sports [8, 37]. Apart from the representative collection of training means, the body physique and age, often identiﬁed with the sports performance level, were also used. The impact of those elements on the organiza- Fig. 5 Screenshot of the box for entering endurance training loads tion of training has been already emphasized on several occasions [20, 23]. 
It seems appropriate to determine the The observation of training programs of the best athletes initial value (record from the given year) and the estimated [24], supported by the analysis of correlation between the scale of progress (plans for the next season), because it results of hurdle run and the tests results including the makes it possible to control training loads depending on the physiological [16, 64] and biochemical basis [28], allows athletes age, their current performance level and the main for selection of groups of the most valuable basic exercises. objective. Each athlete has different predispositions, also to The performance tests carried out during the ergometric perform speciﬁc training tasks. The selection of exercises is effort [5] are of great importance in assessing the speciﬁcs necessary, because it is impossible to perform the same of hurdlers’ effort [5]. It should be emphasized that indi- volume of all exercises at the same time. Such a procedure vidualization of hurdlers’ training programs also requires would also be pointless, since the ‘‘rhythmic’’ type of a an individual approach to the type of capacity, oscillating hurdler prefers running exercises with hurdles, and the between aerobic and anaerobic capacity. Sprinting dis- ‘‘speedy’’ type of the hurdle runner prefers shorter dis- tances in hurdling are considered to be typical running tances of the interval nature [20]. The database used is efforts of anaerobic nature. In the case of 110 m hurdle based on the period of 20 years of training Polish hurdlers, race, anaerobic non-lactic acidic changes with the ﬁnal members of the national team. Those hurdlers represented accent on anaerobic lactic acidic changes are predominant. various types (somatic, efﬁcient and technical); therefore The 400 m hurdle run requires ﬁrst of all an effort of the scope of generalization (approximation) possibilities of anaerobic non-lactic acidic nature [57]. 
Data concerning the proposed computer system is signiﬁcant and partly the speciﬁcity of the effort at a distance of 400 m indicate representative. that the proportions of aerobic and anaerobic efforts can be 123 Neural Computing and Applications Fig. 6 Screenshot of the panel for generation of training loads for one training programme Summarizing the discussion, it should be noted that 7 Conclusions there are severe limitations of the presented approach connected with using the results in practice. The training In this paper, a web-oriented expert system to predict programs do not consider the individual physiological and results and generate training loads for 110 and 400 m psychological parameters of an athlete. However, the hurdles races was presented. The system uses the linear generated training programs might be used as a suggestion regression models (OLS, ridge, LASSO, elastic net) and for the coach who can perform necessary adjustments in nonlinear regression models (RBF, fuzzy model, OLS with order to adapt them for a particular athlete. fuzzy correction). The lowest errors were obtained by the 123 Neural Computing and Applications Fig. 7 Screenshot of the panel for generation of training loads over the athlete’s entire career proposed F-OLS model, but creating this model is more as personal computers and mobile devices. The easy-to-use complicated. interface allows the parameters of an athlete and the The application was implemented using R programming training loads to be changed. In this way, the coach can language with Shiny framework. The advantage of this predict the expected result and select individual training application is that it can be run on multiple platforms such components for a given athlete. 123 Neural Computing and Applications Fig. 8 Screenshot of the panel for athletes’ database 3. Arlot S, Celisse A (2010) A survey of cross-validation procedures Further work will focus on migrating the developed for model selection. 
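The athletes' database format described in Sect. 5.3 (a .csv file with ";" as the field separator, "." as the decimal point, and the columns "Name", "Surname", "Age", "Body Mass" and "Body Height") can be reproduced with base R. The sketch below is only an illustration of that file format with invented sample athletes; it is not code from the iHurdling system:

```r
# Sample athletes' database in the format the application expects:
# ";" as the field separator and "." as the decimal point.
athletes <- data.frame(
  Name          = c("Jan", "Piotr"),
  Surname       = c("Kowalski", "Nowak"),
  Age           = c(23, 21),
  "Body Mass"   = c(78.5, 74.0),
  "Body Height" = c(188.0, 184.5),
  check.names   = FALSE
)

path <- tempfile(fileext = ".csv")

# write.table with sep = ";" and dec = "." produces the stated format
write.table(athletes, path, sep = ";", dec = ".", row.names = FALSE)

# read.csv with the same separators loads it back; check.names = FALSE
# preserves the two-word column names "Body Mass" and "Body Height"
db <- read.csv(path, sep = ";", dec = ".", check.names = FALSE)
print(db)
```

Note that R's shortcut read.csv2 assumes "," as the decimal point, so the explicit sep and dec arguments of read.csv match the format stated in the paper.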
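The leave-one-out cross-validation used to compare the models can be sketched in a few lines of base R. The function below is a generic illustration of LOOCV for a plain OLS model on a built-in data set, not the authors' evaluation code; for the regularized, RBF and fuzzy models the fit and predict calls would be replaced accordingly:

```r
# Leave-one-out cross-validation of a linear model: each observation
# is held out once, the model is fitted on the remaining rows, and
# the held-out observation is predicted.
loocv_rmse <- function(formula, data) {
  resp <- all.vars(formula)[1]            # name of the response variable
  n <- nrow(data)
  errors <- vapply(seq_len(n), function(i) {
    fit  <- lm(formula, data = data[-i, ])        # fit without row i
    pred <- predict(fit, newdata = data[i, ])     # predict row i
    as.numeric(data[[resp]][i] - pred)            # hold-out residual
  }, numeric(1))
  sqrt(mean(errors^2))                    # RMSE over all n folds
}

# Example on the built-in 'trees' data set
rmse <- loocv_rmse(Volume ~ Girth + Height, trees)
print(rmse)
```

The same loop structure lets every candidate model be scored on identical folds, which is how the models in Table 5 can be compared on equal terms.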
Acknowledgements This work has been supported by the Polish Ministry of Science and Higher Education within the research project "Development of Academic Sport" in the years 2016–2019, Project No. N RSA4 00554.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Allaire J, Cheng J, Xie Y, McPherson J, Chang W, Allen J, Wickham H, Atkins A, Hyndman R (2016) rmarkdown: dynamic documents for R. https://CRAN.R-project.org/package=rmarkdown. R package version 1.2. Accessed Feb 2017
2. Arcelli E, Mambretti M, Cimadoro G, Alberti G (2008) The aerobic mechanism in the 400 metres. New Stud Athl 23(2):15–
3. Arlot S, Celisse A (2010) A survey of cross-validation procedures for model selection. Stat Surv 4:40–79
4. Attali D (2016) shinyjs: easily improve the user experience of your Shiny apps in seconds. https://CRAN.R-project.org/package=shinyjs. R package version 0.8. Accessed Feb 2017
5. Balsalobre-Fernández C, Tejero-González CM, del Campo-Vecino J, Alonso-Curiel D (2013) The effects of a maximal power training cycle on the strength, maximum power, vertical jump height and acceleration of high-level 400-meter hurdlers. J Hum Kinet 36(1):119–126
6. Bergmeir C, Benítez JM (2012) Neural networks in R using the Stuttgart neural network simulator: RSNNS. J Stat Softw 46(7):1–26. http://www.jstatsoft.org/v46/i07/. Accessed Feb 2017
7. Bishop CM (2006) Pattern recognition and machine learning. Information science and statistics. Springer, New York
8. Bompa TO, Haff G (1999) Periodization: theory and methodology of training. Human Kinetics, Champaign
9. Chang W (2016) shinydashboard: create dashboards with 'Shiny'. https://CRAN.R-project.org/package=shinydashboard. R package version 0.5.3. Accessed Feb 2017
10. Chang W (2016) shinythemes: themes for Shiny. https://CRAN.R-project.org/package=shinythemes. R package version 1.1.1. Accessed Feb 2017
11. Chang W, Cheng J, Allaire J, Xie Y, McPherson J (2016) shiny: web application framework for R. https://CRAN.R-project.org/package=shiny. R package version 0.14.2. Accessed Feb 2017
12. Curtis KM (2010) Cricket batting technique analyser/trainer: a proposed solution using fuzzy set theory to aid West Indies cricket. In: Proceedings of the 9th WSEAS international conference on artificial intelligence, knowledge engineering and data bases, AIKED'10. World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, pp 71–76
13. Edelmann-Nusser J, Hohmann A, Henneberg B (2002) Modeling and prediction of competitive performance in swimming upon neural networks. Eur J Sport Sci 2(2):1–10
14. Er A, Dias R (2000) A rule-based expert system approach to process selection for cast components. Knowl Based Syst 13(4):225–234. https://doi.org/10.1016/S0950-7051(00)00075-7
15. Gu W, Saaty TL, Whitaker R (2016) Expert system for ice hockey game prediction: data mining with human judgment. Int J Inf Technol Decis Mak 15(04):763–789. https://doi.org/10.1142/S0219622016400022
16. Gupta S, Goswami A, Mukhopadhyay S (1999) Heart rate and blood lactate in 400 m flat and 400 m hurdle running: a comparative study. Indian J Physiol Pharmacol 43:361–366
17. Haghighat M, Rastegari H, Nourafza N (2013) A review of data mining techniques for result prediction in sports. Adv Comput Sci Int J 2(5):7–12
18. Hoerl AE, Kennard RW (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1):55–67
19. Irani Z, Sharif A, Kamal MM, Love PE (2014) Visualising a knowledge mapping of information systems investment evaluation. Expert Syst Appl 41(1):105–125. https://doi.org/10.1016/j.eswa.2013.07.015
20. Iskra J (2012) Athlete typology and training strategy in the 400 m hurdles. New Stud Athl 27(1–2):27–37
21. Iskra J, Coh M (2011) Biomechanical studies on running the 400 m hurdles. Hum Mov 12(4):315–323
22. Iskra J, Ryguła I (2001) The optimization of training loads in high class hurdlers. J Hum Kinet 6:59–72
23. Iskra J, Walaszczyk A (2003) Anthropometric characteristics and performance of 110m and 400m male hurdlers. Kinesiology 35(1):36–47
24. Iskra J, Widera J (2001) The training preparation of the world junior 400 m hurdles champion. Track Coach 156:4980–4984
25. Iskra J, Zajac A, Waskiewicz Z (2006) Laboratory and field tests in evaluation of anaerobic fitness in elite hurdlers. J Hum Kinet 16:25
26. Jain MB, Jain A, Srinivas MB (2008) A web based expert system shell for fault diagnosis and control of power system equipment. In: International conference on condition monitoring and diagnosis, CMD 2008, pp 1310–1313. https://doi.org/10.1109/CMD.2008.4580217
27. Kiartzis SJ, Bakirtzis AG, Theocharis JB, Tsagas G (2000) A fuzzy expert system for peak load forecasting. Application to the Greek power system. In: MELECON 2000: 10th Mediterranean Electrotechnical Conference, vol 3, pp 1097–1100. https://doi.org/10.1109/MELCON.2000.879726
28. Kłapcińska B, Iskra J, Poprzecki S, Grzesiok K (2001) The effects of sprint (300 m) running on plasma lactate, uric acid, creatine kinase and lactate dehydrogenase in competitive hurdlers and untrained men. J Sports Med Phys Fit 41(3):306–311
29. Kusy M, Obrzut B, Kluska J (2013) Application of gene expression programming and neural networks to predict adverse events of radical hysterectomy in cervical cancer patients. Med Biol Eng Comput 51(12):1357–1365. https://doi.org/10.1007/s11517-013-1108-8
30. Lapková D, Pluháček M, Komínková Oplatková Z, Adámek M (2014) Using artificial neural network for the kick techniques classification—an initial study. In: Proceedings 28th European conference on modelling and simulation ECMS, pp 382–387
31. Lee HJ, Rhee KP (2001) Development of collision avoidance system by using expert system and search algorithm. Int Shipbuild Prog 48(3):197–212
32. Lo CY, Chang HI, Chang YT (2009) Research on recreational sports instruction using an expert system. Springer, Berlin, pp 250–262. https://doi.org/10.1007/978-3-642-04875-3_28
33. Louzada F, Maiorano AC, Ara A (2016) iSports: a web-oriented expert system for talent identification in soccer. Expert Syst Appl 44:400–412. https://doi.org/10.1016/j.eswa.2015.09.007
34. Maszczyk A, Roczniok R, Waskiewicz Z, Czuba M, Mikołajec K, Zajac A, Stanula A (2012) Application of regression and neural models to predict competitive swimming performance. Percept Motor Skills 114(2):610–626
35. Maszczyk A, Zajac A, Ryguła I (2011) A neural network model approach to athlete selection. Sports Eng 13(2):83–93
36. Mezyk E, Unold O (2011) Machine learning approach to model sport training. Comput Hum Behav 27(5):1499–1506
37. Mujika I (2009) Tapering and peaking for optimal performance. Human Kinetics, Champaign
38. Najjaran H, Sadiq R, Rajani B (2006) Fuzzy expert system to assess corrosion of cast/ductile iron pipes from backfill properties. Comput Aided Civ Infrastruct Eng 21(1):67–77. https://doi.org/10.1111/j.1467-8667.2005.00417.x
39. Papić V, Rogulj N, Pleština V (2009) Identification of sport talents using a web-oriented expert system with a fuzzy module. Expert Syst Appl 36(5):8830–8838. https://doi.org/10.1016/j.eswa.2008.11.031
40. Pfeiffer M, Hohmann A (2012) Applications of neural networks in training science. Hum Mov Sci 31(2):344–359
41. Pfeiffer M, Perl J (2006) Analysis of tactical structures in team handball by means of artificial neural networks. Int J Comput Sci Sport 5(1):4–14
42. Platonow N (2015) Sistema podgotowki sportsmienow w olimpijskom sportie. Olimpijskaja literatura, Kiev (in Russian)
43. Przednowek K, Iskra J, Przednowek KH (2014) Predictive modeling in 400-metres hurdles races. In: 2nd international congress on sport sciences research and technology support (icSPORTS 2014). SCITEPRESS, Rome, pp 137–144
44. Przednowek K, Iskra J, Wiktorowicz K, Krzeszowski T, Maszczyk A (2017) Planning training loads for the 400 m hurdles in three-month mesocycles using artificial neural networks. J Hum Kinet 60(1):175–189
45. Przednowek K, Wiktorowicz K, Krzeszowski T, Iskra J (2016) A fuzzy-based software tool used to predict 110m hurdles results during the annual training cycle. In: Proceedings of the 4th international congress on sport sciences research and technology support (icSPORTS-2016). SCITEPRESS, pp 176–181
46. R Core Team (2016) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. http://www.R-project.org/. Accessed Feb 2017
47. Riza LS, Bergmeir C, Herrera F, Benítez JM (2015) frbs: fuzzy rule-based systems for classification and regression in R. J Stat Softw 65(6):1–30
48. Roczniok R, Ryguła I, Kwaśniewska A (2007) The use of Kohonen's neural networks in the recruitment process for sport swimming. J Hum Kinet 17:75–88
49. Rogulj N, Papic V, Cavala M (2009) Evaluation models of some morphological characteristics for talent scouting in sport. Coll Antropol 33(1):105–110
50. Ryguła I (2005) Artificial neural networks as a tool of modeling of training loads. In: 27th annual international conference of the engineering in medicine and biology society, IEEE-EMBS, pp 2985–2988
51. Schroder S, Dabidian P, Liedtke G (2015) A conceptual proposal for an expert system to analyze smart policy options for urban transports. In: Smart cities symposium Prague (SCSP), pp 1–6. https://doi.org/10.1109/SCSP.2015.7181555
52. Silva AJ, Costa AM, Oliveira PM, Reis VM, Saavedra J, Perl J, Rouboa A, Marinho DA (2007) The use of neural network technology to model swimming performance. J Sports Sci Med 6(1):117–125
53. Singh PK, Sarkar R (2015) A simple and effective expert system for schizophrenia detection. Int J Intell Syst Technol Appl 14(1):27–49. https://doi.org/10.1504/IJISTA.2015.072218
54. Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B 58(1):267–288
55. Venables WN, Ripley BD (2002) Modern applied statistics with S. Springer, New York. http://www.stats.ox.ac.uk/pub/MASS4
56. Wang LX, Mendel JM (1992) Generating fuzzy rules by learning from examples. IEEE Trans Syst Man Cybern 22(6):1414–1427. https://doi.org/10.1109/21.199466
57. Ward-Smith A (1997) A mathematical analysis of the bioenergetics of hurdling. J Sport Sci 15(5):517–526
58. Wiktorowicz K, Przednowek K, Lassota L, Krzeszowski T (2015) Predictive modeling in race walking. Comput Intell Neurosci 2015:9. https://doi.org/10.1155/2015/735060. Article ID 735060
59. Wilk R, Fidos-Czuba O, Rutkowski Ł, Kozłowski K, Wiśniewski P, Maszczyk A, Stanula A, Roczniok R (2015) Predicting competitive swimming performance. Cent Eur J Sport Sci Med 9(1):105–112
60. Zarinbal M, Fazel Zarandi MH, Turksen IB, Izadi M (2015) A type-2 fuzzy image processing expert system for diagnosing brain tumors. J Med Syst 39(10):1–20. https://doi.org/10.1007/s10916-015-0311-6
61. Zhou D, Ma J, Turban E, Bolloju N (2002) Soft decision analysis: a fuzzy set approach to the evaluation of journal grades. Fuzzy Sets Syst 131(1):63–74. https://doi.org/10.1016/S0165-0114(01)00255-X
62. Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B (Stat Methodol) 67(2):301–320
63. Zou H, Hastie T (2012) elasticnet: elastic-net for sparse estimation and sparse PCA. https://CRAN.R-project.org/package=elasticnet. R package version 1.1. Accessed Feb 2017
64. Zouhal H, Jabbour G, Jacob C, Duvigneau D, Botcazou M, Abderrahaman AB, Prioux J, Moussa E (2010) Anaerobic and aerobic energy system contribution to 400-m flat and 400-m hurdles track running. J Strength Cond Res 24(9):2309–2315
Neural Computing and Applications – Springer Journals
Published: May 30, 2018