Nested-Stacking Genetic Algorithm for the Optimal Placement of Sensors in Bridge

Abstract

To address the sensor location optimization problem in the health monitoring system of an existing bridge, a new optimization algorithm called the Nested-Stacking Genetic Algorithm (NSGA) is proposed. The new algorithm combines the traditional Genetic Algorithm (GA) with the Nested-Partitions Algorithm (NPA): the partition operation divides the basic solution domain of the optimization problem into a feasible region and an unfeasible region, the GA is then used for sampling and selection, and the optimized values of the discrete variables are obtained step by step through iteration. Two benchmark test functions were adopted to test the algorithm. The main control parameters of the new algorithm, such as the crossover and mutation rates, genetic algebra, population size, data volume parameter and backtrack parameter, were evaluated through several tentative calculations on the bridge, and the results were compared and analyzed. The new algorithm is found to outperform the traditional optimization algorithms; it is only weakly sensitive to almost all of the parameters and has strong anti-interference ability.

1. INTRODUCTION

In recent decades, large-scale bridges with diverse structural forms, such as suspension bridges, cable-stayed bridges and various compound systems, have been constructed, and different degrees of damage arise during bridge construction and across the lifetime of a bridge. As a result, it is necessary to establish a complete Structural Health Monitoring (SHM) system to obtain the operational information of the bridge.
In a bridge health monitoring system, the original information collected by the sensor network reflects the structural behavior and can be used for SHM, structural damage identification and model updating after deployment [1]. The availability and accuracy of the collected data are determined by the performance of the sensors configured on the bridge. In a sensor subsystem, it is essential to collect integrated information about the bridge; however, arranging sensors over the whole bridge is inadvisable, since it results in information redundancy and a waste of money. Therefore, the selection and optimal arrangement of sensors, known as the Optimal Sensor Placement (OSP) problem, is an active research field in which many scholars at home and abroad have worked. In 1991, Kammer [2] proposed the Effective Independence (EfI) method, a widely used global optimization method. EfI optimizes the Fisher Information Matrix (FIM) by removing the degrees of freedom (DOF) of the modal data that contribute least to the independence of the mode shapes. Subsequently, information theory-based methods were gradually developed for the OSP problem. Application of the EfI method to actual projects revealed a deficiency: the sensor configuration scheme may include nodes with low vibration energy, which increases the parameter estimation error [3]. Building on the FIM, the Modal Assurance Criterion (MAC) and Modal Strain Energy (MSE) [4] were established to assess the dynamic information collected by the deployed sensors. Papadimitriou [5] introduced the information entropy norm to measure the uncertainty in the system parameters. Among these, the MAC is commonly used because of its simple calculation process and definite identifiability.
With the development of approaches to the OSP problem, the Genetic Algorithm (GA) is often chosen as the optimization algorithm [6]. The GA, a heuristic random search algorithm derived from Darwin's theory of biological evolution [7], was developed when researchers began using computers to simulate natural organisms. The GA represents candidate solutions as encoded strings, evaluates the merit of each individual by calculating a fitness function, and then obtains the optimum through crossover and mutation operations applied to each individual to simulate natural evolution. In recent years, the GA has been combined with other algorithms and methods to address many problems [8–15]. Faisal Shabbir [16] adopted an improved GA for the structural model updating process. Moisés Silva [17] devised another GA-based algorithm to detect structural damage in bridges. The GA has played an important role in handling general optimization problems. As heuristic algorithms were adopted for sophisticated global optimization problems, the Particle Swarm Optimization (PSO), Monkey Algorithm (MA) and Simulated Annealing (SA) algorithms emerged, and their applications to the OSP problem have been studied recently [18–22]. The Firefly Algorithm (FA), proposed by Yang [23], is based on the idealized flashing behavior of fireflies. Some improved algorithms have been presented in recent research as well. Guang-dong Zhou [4, 24] proposed a Simple Discrete FA (SDFA) and a Hybrid Discrete FA (HDFA) to deal with the cabled and wireless sensor placement problems, respectively.
A Nondirective Movement Glowworm Swarm Optimization (NMGSO) algorithm was also proposed, based on the basic glowworm swarm optimization algorithm, to identify effective Pareto-optimal sensor configurations [25]. Among the global optimization algorithms, which have a wide application range and strong applicability [26, 27], the Nested Partitions Algorithm (NPA), in addition to the GA, performs well on most optimization problems. The NPA was first proposed in 1997. Through a large number of examples, Shi [28, 29] and other scholars [30, 31] found that the NPA converges rapidly and has fine global search capability and strong local search capability. For general discrete event dynamic system problems, it is difficult to find a mathematical analytical solution path because the laws of their solution structures are ambiguous; in this situation, the NPA provides a random search approach that bypasses the analytical route. The NPA consists of four steps: Partition, Sampling, Selection and Backtrack. With the extensive construction of long-span bridges, however, the number of sensors arranged on a bridge has also increased. The OSP problem has evolved into a combinatorial optimization problem whose variables are multi-dimensional and discrete; as a result, the problem has a large basic solution domain. For this problem, the GA and its improved variants have been unable to obtain results quickly and accurately, and tuning the many parameters of these algorithms is an enormous workload. To resolve these problems, a Nested-Stacking GA (NSGA) that combines the NPA with the GA is proposed in this paper. The NSGA [32] is built on the theory of the NPA and uses the GA as its computing tool. An innovation of this algorithm is that it combines the advantages of the NPA and the GA, making it more effective at handling large amounts of modal data.
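The four NPA steps named above can be sketched as a generic maximization loop. This is a minimal illustration, not the implementation used in the paper: the callables `sample`, `evaluate` and `partition`, the sample count and the backtracking policy are all illustrative assumptions.

```python
import random

def nested_partition(domain, sample, evaluate, partition,
                     n_samples=10, max_iter=100):
    """Skeleton of the four NPA steps (Partition, Sampling, Selection,
    Backtrack) for a maximization problem. All names are illustrative."""
    region = domain                 # current most-promising region
    history = [domain]              # super-regions kept for backtracking
    best, best_val = None, float("-inf")
    for _ in range(max_iter):
        subregions = partition(region)                 # 1. Partition
        if not subregions:                             # region is a singleton
            break
        scored = []
        for sub in subregions:                         # 2. Sampling
            pts = [sample(sub) for _ in range(n_samples)]
            vals = [evaluate(p) for p in pts]
            top = max(range(n_samples), key=vals.__getitem__)
            scored.append((vals[top], pts[top], sub))
        val, pt, sub = max(scored, key=lambda s: s[0])  # 3. Selection
        if val >= best_val:                             # promising: descend
            best, best_val = pt, val
            history.append(sub)
            region = sub
        else:                                           # 4. Backtrack
            if len(history) > 1:
                history.pop()
            region = history[-1]
    return best, best_val
```

As a usage sketch, maximizing -(x - 37)^2 over the integers [0, 100) with bisection as the partition rule drives the promising region toward ever smaller intervals around the optimum.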
For the new algorithm proposed in this paper, the main control parameters are listed, and the sensitivity of each parameter is analyzed on a model of an actual example to determine suitable parameters for similar problems.

2. OPTIMIZATION ALGORITHM

2.1. Target Function for Optimization

The optimization process of the proposed algorithm is based on the target function, which must be established first. In the OSP problem, vibration-based structural condition assessment methodologies require that the measured mode shapes be distinguishable from each other so that they can be reliably identified [4]. Consequently, the MAC [33] is commonly used to build the modal assurance matrix:

MAC_{i,j} = \frac{(\Phi_{*,i}^T \Phi_{*,j})^2}{(\Phi_{*,i}^T \Phi_{*,i})(\Phi_{*,j}^T \Phi_{*,j})}  (1)

where i and j denote the modal orders; MAC_{i,j} is the element in the ith row and jth column of the modal assurance matrix; and \Phi_{*,i} and \Phi_{*,j} are the ith and jth columns of the modal matrix, respectively. The MAC can identify whether the complete information of a bridge is collected by the sensors by estimating the orthogonality of the selected mode shape vectors. A small maximum off-diagonal element of the MAC matrix implies less correlation between the corresponding mode shape vectors; thus, the off-diagonal elements of the MAC are adopted to guide the sensor placement. Considering that the traditional MAC matrix cannot optimize the three directions, x, y and z, of a 3D sensor, Yi [34] developed a three-dimensional MAC (TMAC) by assembling the three translational DOF of a structural node as an independent unit. The formula for the TMAC is:

T_{i,j} = \frac{Q_{i,j}^2}{Q_{i,i} Q_{j,j}}  (2)

where T_{i,j} is the element in the ith row and jth column of the 3D modal assurance matrix T, with T_{i,j} \in [0, 1]; and Q_{i,j} is the element in the ith row and jth column of the FIM Q, which uses the node where sensors will be laid as its unit.
The formula for Q is:

Q = \sum_{k=1}^{n_{sp}} \phi_{3k,*}^T \phi_{3k,*} = \sum_{k=1}^{n_{sp}} Q_k  (3)

where Q_k is the FIM of the node at the position of the kth 3D sensor; \phi_{3k,*} is the matrix consisting of the modal vectors corresponding to the three DOF of the kth node in the modal matrix \phi; and n_{sp} is the number of sensors to be arranged. The TMAC thus accounts for the spatial information of a bridge, which is closer to the practical condition. In sensor site selection, the smaller the maximum off-diagonal element of the TMAC matrix, the better the orthogonality between two selected nodes and the more bridge information is reflected. Since the off-diagonal elements of the TMAC lie in [0, 1], 1 − T can be used as the optimization target function, referred to hereinafter as the fitness function in the genetic operation; its results are the performance values:

Fitness = 1 − \max_{i \ne j}(T_{i,j})  (4)

A larger performance value corresponds to a better sensor placement.

2.2. Nested-Stacking GA

When the two algorithms described above, the GA and the NPA, are applied to the OSP problem, some shortcomings emerge even though both perform impressively in global searches. The NPA is limited by the sampling quantity of its random sampling method: it may extract large amounts of weakly representative data, which can easily lead the selection operation into an incorrect partition. In that case the backtracking operation is triggered, which reduces the algorithm's efficiency. The GA, although it searches in a global scope, easily falls into local optima when the solution space grows exponentially, because its global search mechanism is applied in all cases. Therefore, neither the GA nor the NPA is effective when applied to the current optimal sensor placement problems of long-span bridges.
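Equations (2)–(4) can be combined into a single fitness evaluation. The sketch below assumes a modal matrix whose rows are grouped in triples (the x, y, z DOF of each node), which is one plausible data layout, not necessarily the one used in the paper.

```python
import numpy as np

def tmac_fitness(phi, nodes):
    """Fitness = 1 - max off-diagonal TMAC element, per Eqs. (2)-(4).
    phi   : (3*n_nodes, n_modes) modal matrix; rows 3k..3k+2 are the
            three translational DOF of node k (assumed layout).
    nodes : 0-based indices of the nodes carrying a 3D sensor."""
    n_modes = phi.shape[1]
    # Eq. (3): Fisher information matrix summed over the selected nodes
    Q = np.zeros((n_modes, n_modes))
    for k in nodes:
        phi_k = phi[3 * k:3 * k + 3, :]   # 3 DOF of node k
        Q += phi_k.T @ phi_k
    # Eq. (2): T_ij = Q_ij^2 / (Q_ii * Q_jj)
    d = np.sqrt(np.diag(Q))
    T = (Q / np.outer(d, d)) ** 2
    # Eq. (4): penalize the largest off-diagonal correlation
    off_diag = T - np.diag(np.diag(T))
    return 1.0 - off_diag.max()
```

With orthogonal mode shape columns the off-diagonal TMAC terms vanish and the fitness reaches its ideal value of 1; with fully correlated columns it drops to 0.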
For general optimization problems, the optimization function is assumed to take the mathematical form:

G = F(x_1, x_2, \ldots, x_m), \quad x_i \in \Theta \; (i = 1, 2, \ldots, m)  (5)

where the x_i are the independent variables; m is the number of non-interfering independent variables; \Theta is the definition domain of the independent variables; and F can be a determined functional expression or mathematical structure, or a blurred black-box problem. The solution is z = G_{max}. Formula (5) can express most optimization problems. The NSGA proposed in this paper establishes its algorithmic framework on the ideology of the NPA, using the GA as the basis of the partition step and the random sampling method in the sampling step of the NPA. The core concept is to approximate the solution of the problem by continuously reducing the dimensionality of the independent variables; in this paper, the dimensionality represents the number of sensors. Each loop of the algorithm's steps is called a Layer, and each layer's operation reduces the dimensionality of the variables once, i.e. a certain number of sensor positions are determined. The mode in which the layers stack up, with each layer nested in the next, is called the nested-stacking operation. This algorithm has two characteristics. First, the exponent of the solution space is reduced by the partition operation, which shrinks the search space and thereby increases the accuracy of the global optimization. Second, a step-by-step approach is used to find incomplete solutions of the discrete variables at different accuracies, rather than obtaining the complete solution in one step; in this manner, the error of a wide-range random search is decreased. According to the algorithm theory, the scheme of the NSGA is shown in Fig. 1.

Figure 1. Scheme of the NSGA.
2.2.1. Coding and Initialization of the Algorithm

First, the finite element model of the bridge structure requiring a sensor arrangement is built. Next, the modal matrix of the structure is obtained through modal analysis, and the candidate sensor positions are numbered at the same time. Because the TMAC matrix considers the intersection angle of the modal vectors but not the condition of equal FIM indices between neighboring nodes, a certain number of sensors are equally distributed along the whole length of the bridge to avoid information redundancy, so that different sensors occupy different locations. This method effectively prevents multiple sensors from being arranged in the same zone. Therefore, the decimal encoding mode used in this algorithm can easily solve the problem of determining sensor locations over the full-bridge range. Assuming that the whole bridge length is L, the number of candidate sensor points is n and the number of sensors is m, the coding and initialization steps of the algorithm are as follows:

Step 1: The candidate points over the whole bridge are numbered 1 to n; each point corresponds to a node in the bridge model with position coordinate (s_i, 0, 0), i \in \{1, 2, \ldots, n\}.

Step 2: Dividing the full bridge length L equally according to the number of sensors, the sensors are arranged in zones, as shown in Fig. 2. Each zone contains a different number of candidate points, and the length of each zone is l = L/m.

Figure 2. Zone layout of the sensors.

Here s_i are the abscissa values of the points, i \in [1, n], s_i \in [0, L]; t_j denotes the position number of a candidate sensor point in each zone, j \in [1, m], t_j \in [1, n]; r_j denotes the point number at a zone boundary, r_j \in [1, n]; and i, j, t and r are integers.
If a node t_j(s_{t_j}, 0, 0) satisfies (j−1)l < s_{t_j} \le jl, then t_j \in (r_{j−1}, r_j]. Therefore, the set \{t_1, t_2, \ldots, t_j, \ldots, t_m\}, t_j \in (r_{j−1}, r_j], which consists of one sensor point selected from each of the m zones, is a value of the independent variable of the problem. The m independent variables are called genes, and this set is called an individual. A population consists of p randomly generated individuals.

Step 3: Set the initial parameters, including the initial genetic algebra G_s and the crossover and mutation rates of the GA. Next, a population is generated randomly as the first generation of individuals for the algorithm operations. Last, the previously extracted modal matrix is imported.

2.2.2. Free Partition

The free partition operation randomly searches for an individual in the subregions through a plurality of parallel GAs. The super-region selected in the last iteration is then divided into two regions, a feasible region and an unfeasible region, according to the solution repetitiveness of the discrete variables. The number of parallel GAs is set to N in the first operation, denoted by iteration layer number 1. The modal data corresponding to the gene positions of the individual \{t_1, t_2, \ldots, t_m\} generated by initialization are extracted, and the performance values are calculated by substituting the modal data into formula (4). Selection, crossover and mutation operations are then performed until the number of iterations reaches G_s, at which point the algorithm converges and the solutions of the N parallel GAs, \{x_1^N, x_2^N, \ldots, x_m^N\}, are obtained. Comparing the solutions of the N parallel algorithms, if the jth gene satisfies x_j^1 = x_j^2 = \ldots = x_j^N, this gene is confirmed to have a large impact on the target function. The u identical genes are determined by comparing all m gene positions in order.
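The gene-by-gene agreement test described above can be sketched as follows. This is a minimal illustration of only the comparison step; the surrounding parallel GA runs are assumed to have already produced the N solutions.

```python
def free_partition(solutions):
    """Free-partition step: compare the solutions of N parallel GA runs
    gene by gene. Positions where all runs agree are fixed and move into
    the incomplete solution domain (the feasible region); the remaining
    positions stay free for the next layer. A minimal sketch."""
    n_genes = len(solutions[0])
    fixed = {}                        # gene position -> agreed value
    for j in range(n_genes):
        values = {sol[j] for sol in solutions}
        if len(values) == 1:          # x_j^1 = x_j^2 = ... = x_j^N
            fixed[j] = solutions[0][j]
    free = [j for j in range(n_genes) if j not in fixed]
    return fixed, free                # {x_u} and the indices of t_{m-u}
```

For example, if three parallel runs return the individuals [5, 12, 30], [5, 9, 30] and [5, 14, 30], the first and third genes are fixed and only the middle position remains undetermined in the next layer.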
Thus, \{x_u, t_{m−u}\} is the solution expression, named the first incomplete solution domain, where x_u and t_{m−u} are the determined and undetermined variables, respectively. This solution divides the solution domain into two subregions: the incomplete solution domain (also called the feasible region) and the domain of the remaining solutions (the unfeasible region). If the iteration layer number is a, a > 1, the same v genes have been determined after several iterations; the solution in this layer further divides the feasible region obtained from the previous generation into the a-th incomplete solution domain \{x_v, t_{m−v}\} and the unfeasible region. If v = m, then \{t_m\} must be the optimal solution. Hence, the steps of the free partition are:

Step 1: Compare the gene values at the same positions of the solutions obtained from the N parallel GAs.

Step 2: Obtain the (a + 1)-th incomplete solution domain by fixing the positions of genes with identical values.

2.2.3. Subregion Sampling

The incomplete solution domain obtained from the partition operation is searched randomly by the GA. Among random search methods, the GA has strong global dominance and its search results are highly representative, so it better reflects the performance values of the subregions. The steps of subregion sampling are:

Step 1: In the a-th incomplete solution domain \{x_v, t_{m−v}\} obtained from the free partition operation, assign new random values to the t_{m−v} variables.

Step 2: Obtain N solutions by applying the genetic operation to the t_{m−v} variables, and record the best individual performance value and the average optimal population performance value of each solution.

2.2.4. Layered Iteration and Selection

The number of variables in the target function is reduced by gradually fixing the values of the variables that most strongly impact the operation results.
Therefore, the number of unknown quantities in the incomplete solutions is reduced by layering. This step proceeds as follows. If the backtrack mechanism is not triggered, the values of the remaining m − v variables in the solutions \{x_v, x_{m−v}\} of the N parallel algorithms obtained from the subregion sampling operation are compared. If identical values exist among the m − v variables, they are moved into the incomplete solution, i.e. the feasible region of the next generation is selected, and the operation proceeds to the next layer; otherwise, the operation remains at the present layer.

2.2.5. Backtrack

Backtracking is implemented when the sampling performance value declines during the iteration process, which means an error has appeared in the random sampling; this process is similar to that in the NPA. In this algorithm, the basic condition is that the number of layers must be greater than 1. In the subregion sampling process, each GA operation yields a best individual and a best population, and the performance value is the criterion for choosing the best individual. If the best individual performance of layer a + 1 is less than that of layer a, the algorithm may have degenerated; in this case, the algorithm backtracks to the previous layer and re-runs. Moreover, to ensure the inheritance of superior genes, the backtrack operation is also performed when the average performance value of the best population satisfies Fitness(a+1) < \alpha \cdot Fitness(a), where \alpha is the confidence level, i.e. the population has degenerated. In addition, the genetic operation in different layers involves different numbers of variables: the deeper the layer, the fewer the variables. Thus, the initial genetic algebra G_s can be decreased linearly in proportion to (m − v)/m.
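The two backtrack triggers and the linear shrinkage of the genetic algebra described above can be sketched compactly. The value of alpha here is illustrative; the paper leaves its concrete value to the parameter study.

```python
def should_backtrack(best_prev, best_curr, pop_prev, pop_curr, alpha=0.95):
    """Backtrack test of Section 2.2.5: trigger when the best individual
    of layer a+1 falls below that of layer a, OR when the average
    best-population performance satisfies Fitness(a+1) < alpha*Fitness(a).
    The default alpha is illustrative only."""
    return best_curr < best_prev or pop_curr < alpha * pop_prev

def next_genetic_algebra(gs_initial, m, v):
    """Shrink the genetic algebra linearly with the number of remaining
    free variables: Gs * (m - v) / m."""
    return max(1, round(gs_initial * (m - v) / m))
```

For instance, with an initial genetic algebra of 4000, m = 40 sensors and v = 10 genes already fixed, the next layer would run 3000 generations.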
The algorithm's operating efficiency improves as the number of iterations falls with the reduction in the number of variables.

2.3. Benchmark Test of the NSGA

To test the computational effectiveness of the NSGA, two benchmark functions are used.

2.3.1. Sphere model

The formula of the function is:

f_1(X) = \sum_{i=1}^{n} x_i^2  (6)

It is a nonlinear symmetric function with a single extremum. The global minimum is f_1 = 0, obtained at x = \{0, 0, \ldots, 0\}. In this example, 10 independent variables with value ranges x_i \in [−100, 100] and a step of one are adopted; thus, about 200 values can be chosen for each variable and the solution space size is 200^10. Following the conventions of the traditional GA, the parameters of the GA and the NSGA are set as in Table 1.

Table 1. Parameter settings.

Algorithm | Crossover rate | Mutation rate | Population size | Chromo size
GA | 0.7 | 0.015 | 40 | 10
NSGA | 0.7 | 0.015 | 40 | 10

The genetic algebra is set to two conditions for comparison, 5000 and 10 000; the results obtained after several calculations are given in Tables 2 and 3.

Table 2. The results of the GA.

Run | 5000 | 10 000
1 | 0 | 1
2 | 0 | 2
3 | 3 | 0
4 | 2 | 0
5 | 2 | 0

Table 3. The results of the NSGA (Value / Layer / Backtrack times).

Run | 5000 | 10 000
1 | 0 / 4 / 5 | 0 / 2 / 0
2 | 1 / 5 / 4 | 0 / 2 / 1
3 | 0 / 4 / 6 | 0 / 2 / 0
4 | 0 / 4 / 4 | 0 / 2 / 0
5 | 0 / 4 / 5 | 0 / 2 / 1

Comparing the results of the two algorithms, the NSGA shows excellent performance on this function: its results almost all converge to the global optimum.

2.3.2. Ackley function

The formula of the function is:

f_2(X) = −20 \exp\left(−0.2 \sqrt{\frac{1}{30} \sum_{i=1}^{n} x_i^2}\right) − \exp\left(\frac{1}{30} \sum_{i=1}^{n} \cos 2\pi x_i\right) + 20 + e  (7)

The function with two variables is shown in Fig. 3. It is a continuous, inseparable function with multiple extremums. The global minimum of the function is f_2 = 0, obtained at x = \{0, 0, \ldots, 0\} (Fig. 4).

Figure 3. Ackley function with two variables.

Figure 4. Layout of the full-bridge example.

In this example, 30 independent variables with value ranges x_i \in [−10, 10] and a step of 0.1 are adopted; thus, about 200 values can be chosen for each variable and the solution space size is 200^30. The parameters of the GA and the NSGA are set as in Table 4.

Table 4. Parameter settings.

Algorithm | Crossover rate | Mutation rate | Population size | Chromo size
GA | 0.5 | 0.01 | 50 | 30
NSGA | 0.5 | 0.01 | 50 | 30

The genetic algebra is set to three conditions for comparison: 5000, 10 000 and 20 000. The results obtained after calculation are shown in Tables 5 and 6.

Table 5. The results of the GA.

Run | 5000 | 10 000 | 20 000
1 | 2.1585 | 0.7828 | 0.2139
2 | 2.0161 | 0.8620 | 0.4714
3 | 1.8402 | 1.1048 | 0.4184
4 | 2.1369 | 0.9243 | 0.3342
5 | 2.7041 | 0.8757 | 0.3405

Table 6. The results of the NSGA (Value / Layer / Backtrack times).

Run | 5000 | 10 000 | 20 000
1 | 0.2139 / 12 / 27 | 0.1775 / 9 / 0 | 0 / 4 / 0
2 | 0 / 10 / 36 | 0.0901 / 8 / 9 | 0 / 4 / 1
3 | 0.0901 / 10 / 37 | 0 / 5 / 10 | 0 / 4 / 0
4 | 0.1374 / 10 / 24 | 0.1374 / 8 / 0 | 0 / 5 / 1
5 | 0.1775 / 10 / 43 | 0 / 6 / 5 | 0 / 4 / 0
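The two benchmark functions, Eqs. (6) and (7), are easily reproduced for testing. The sketch below writes the Ackley normalization as 1/n for generality; the paper fixes n = 30, in which case the two coincide.

```python
import math

def sphere(x):
    """Sphere model, Eq. (6): unimodal; global minimum f1 = 0 at x = 0."""
    return sum(xi ** 2 for xi in x)

def ackley(x):
    """Ackley function, Eq. (7): multimodal; global minimum f2 = 0 at
    x = 0. The paper uses n = 30 variables, so 1/n matches its 1/30."""
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return (-20 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20 + math.e)
```

Both functions evaluate to 0 at the origin, which is the global optimum the tables above measure convergence against.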
Table 5 clearly shows that the GA converges slowly and its solutions cannot reach the global optimum even at a genetic algebra of 20 000. In Table 6, by contrast, some solutions already reach the optimum at a genetic algebra of 5000, and the results become stable as the genetic algebra increases. From the comparison of the GA and the NSGA on the two benchmark test functions, it can be concluded that the NSGA performs better than the GA on standard functions.

3. PARAMETER NUMERICAL EXAMPLE

To evaluate the performance of the proposed algorithm in an actual application, the following steps are performed: (1) the optimal sensor arrangement is calculated for an actual project, (2) the NSGA is compared with the GA and (3) the main control parameters are calculated and their recommended value ranges are given.

3.1. The Project Outline

The project selected in this paper is the Wuhai Yellow River Extradosed Cable-Stayed Bridge in Inner Mongolia, China. This extra-large bridge crosses the Yellow River on the Wuhai section of State Road #110, and its total length is 1130 m. The main bridge, with spans of 120 + 220 + 120 = 460 m, is a partial extradosed cable-stayed section, and the girder is a single box with three cells made of C55 concrete. The pylon and girder are rigidly fixed, and the stay cables are anchored in the middle cell of the box.
In this paper, the sensor arrangement along the girder is the main focus. The finite element model is built for the completed structural system, with the rigid pylon-girder connection used for the superstructure. The finite element fish-bone model (Fig. 5) is established in Ansys; the main beam has 259 nodes and 258 elements. Extracting the stiffness and mass matrices from the model, the structural modes are obtained through modal analysis, and the first 30 modes are collected for calculation in this paper.

Figure 5. Finite element model of the superstructure.

The problem is refined mathematically such that the number of candidate positions on the girder is n = 259. Based on experience, the number of sensors to be set on the girder is assumed to be m = 40, and the sensor type is the 3D accelerometer.

3.2. Operation Comparison

To verify the computing capability of the new algorithm, the z-direction data are extracted from the 1D model of the bridge for a tentative calculation first. The parameters that lead to convergence are then determined after several tentative calculations. For a direct comparison between the GA and the NSGA, the parameters are set approximately equal, as shown in Table 7; the genetic algebra is 5000 for the GA and 4000 for the NSGA, which favors GA convergence while reducing the calculation time of the NSGA.

Table 7. Parameter settings for the tentative calculation.

Algorithm | Crossover rate | Mutation rate | Population size | Chromo size | Genetic algebra | Modal order
GA | 0.7 | 0.018 | 40 | 40 | 5000 | 30
NSGA | 0.7 | 0.018 | 40 | 40 | 4000 | 30
After several runs of the GA, part of the iteration process is shown in Fig. 6; the maximum fitness value is 0.6779. Although each run converges, every convergence value differs from the others. Each convergence value is a local optimum, primarily because the vast basic solution domain produces an enormous number of local optima from which the algorithm cannot easily escape. Repeated runs thus lead to different local optima, and the accuracy of these results cannot be determined.

Figure 6. Iterations of the GA.

After several computations on the same data with the NSGA, the resulting fitness value is 0.7123, a substantial improvement that breaks the convergence limit of the traditional GA. The algorithm stacks six layers and backtracks 45 times. The convergence of each layer is in good condition, as shown in Fig. 7.

Figure 7. Iterations of the NSGA.

For the first layer, depicted in Fig. 7a, the convergence is the same as that of the traditional GA, but with five parallel algorithms running, not all of the unknown quantities are fixed; thus, it is possible to jump out of the local optimum in subsequent computations. For the fourth layer, shown in Fig. 7b, the convergence values differ considerably from each other and the backtrack operation is triggered; the results therefore eliminate the fraction of the solutions with low fitness values. For the last layer, shown in Fig.
7c, the results of three parallel runs converge to the same value, which indicates that no local optimum remains in this layer; the result is therefore likely to be the optimal solution. Fig. 7d shows that the best individual exhibits a strong convergence trend under the backtrack mechanism. The distribution of sensors is shown in Fig. 8.

Figure 8. Distribution of sensors.

3.3. Extraction and Calculation of the Control Parameters

According to the principles and steps of the algorithm, the control parameters are chosen as follows: (1) parameters preset before the calculation: the crossover rate, mutation rate, population size and genetic algebra; (2) the volume of data extracted in the calculation: the modal order of the data; (3) the backtrack confidence level used by the backtrack operation. The ultimate optimization target, the performance value, which is the fitness value of the best individual in the genetic operations, is the primary criterion in evaluating the algorithm. The 3D modal data are used here.

3.3.1. Preset Parameters

The preset parameters are the calculation-related parameters that must be set beforehand, and they generally remain unchanged during the computations. In the parameter calculations that follow, all parameters other than the one under study are held constant (the controlling variable method), which guarantees that the results are reliable.

Crossover and mutation rate. In traditional empirical GA practice, the crossover rate varies from 0.3 to 0.7 and the mutation rate from 0.005 to 0.5; these ranges are used here to observe the impact on the computation results. The results of these calculations are shown in Figs. 9 and 10.

Figure 9. Impact of crossover and mutation rate (NSGA).
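The layer-stacking and backtrack behavior described above can be sketched schematically. This is a reconstruction under stated assumptions, not the authors' exact procedure: `run_ga` stands in for one GA run, positions agreed on by all parallel runs are fixed for the next layer, and a best fitness below the backtrack confidence level times the previous layer's best triggers a backtrack to the previous layer's state.

```python
import random

def nsga_outer_loop(run_ga, n_parallel=5, confidence=1.00, max_layers=50):
    """Schematic NSGA layer/backtrack loop (hypothetical sketch).

    run_ga(fixed) -> (individual, fitness): one GA run with the already
    agreed-upon positions in `fixed` held constant.  Positions chosen by
    every parallel run are fixed for the next layer, shrinking the domain;
    a best fitness below confidence * previous best triggers a backtrack.
    """
    fixed, history, best_prev, backtracks = set(), [], 0.0, 0
    for layer in range(max_layers):
        results = [run_ga(fixed) for _ in range(n_parallel)]
        best_ind, best_fit = max(results, key=lambda r: r[1])
        if best_fit < confidence * best_prev and history:
            fixed = history.pop()          # degeneration detected: backtrack
            backtracks += 1
            continue
        history.append(set(fixed))
        # fix the positions on which all parallel runs agree
        agreed = set.intersection(*(set(r[0]) for r in results))
        if agreed <= fixed:                # nothing new to fix: stop
            break
        fixed |= agreed
        best_prev = best_fit
    return sorted(fixed), best_prev, backtracks

# toy stand-in for a GA run: all runs agree on one new "good" position per
# layer and each adds one noisy pick; fitness = fraction of the target found
def toy_ga(fixed, target=(0, 2, 4, 6, 8)):
    remaining = [t for t in target if t not in fixed]
    pick = set(fixed)
    if remaining:
        pick.add(remaining[0])
        pick.add(random.randrange(10))
    fit = len(pick & set(target)) / len(target)
    return sorted(pick), fit

random.seed(1)
sel, fit, bt = nsga_outer_loop(toy_ga)
print(sel, fit, bt)
```

With `confidence` above 1.0 the loop demands a strict per-layer improvement, which mirrors the stricter backtrack settings examined later in this section.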
Figure 10. Impact of crossover and mutation rate (GA).

The performance values of the NSGA, approximately 0.71, are good in every case. The GA, in contrast, shows large fluctuations, and its results are unstable and poorer than those of the new algorithm. The performance values drop when the mutation rate is too high, whereas they remain stable when the mutation rate is small; the results show no regular trend as the crossover rate changes. Therefore, a crossover rate from 0.3 to 0.7 is a proper range for the NSGA, and a mutation rate from 0.005 to 0.5 can still lead to optima. However, because a large mutation rate causes inferior convergence in the GA operations inside the NSGA, a mutation-rate range of 0.005–0.05 is recommended to avoid a pathological running efficiency.

Genetic algebra. In this example, the GA converges in approximately 4000 generations, so the parameter sensitivity analysis of the genetic algebra is centered on roughly 4000 iterations. As illustrated in Fig. 11, the performance values of the NSGA remain stable without significant fluctuation as the genetic algebra changes. The performance values of the GA are small when the number of iterations is below the convergent algebra, and they fluctuate strongly when the number of iterations is too large.

Figure 11. Impact of genetic algebra.

The indices that affect the efficiency of the NSGA are summarized in Fig. 12.

Figure 12. Operation efficiency indices of the NSGA. Fig.
12 shows that the number of layers of the new algorithm decreases gradually as the number of iterations increases. In addition, the backtrack times under 1000 iterations are considerably large, but beyond 3000 iterations further backtrack operations are rare. These data indicate that even when the iterations are insufficient for convergence, the new algorithm still guarantees the stability of the results; however, the high layer and backtrack counts it then produces have a negative impact on the running efficiency. When the number of iterations is high, both the results and the running efficiency are very stable. Thus, a range of 3000–10 000 is completely adequate for the calculation, and the genetic algebra can be enlarged further as the exponent of the solution space increases.

Population size. As the group parameter of the GA that ensures the diversity of a single generation, the population size influences the convergence timing and the algorithm efficiency. Population sizes of 20–100 individuals are calculated with the two algorithms in this study; the results are shown in Fig. 13.

Figure 13. Impact of population size.

In summary, the performance values and the stability of the NSGA are excellent, and the algorithm shows weak sensitivity to this parameter. The results of the GA, however, fluctuate greatly, and its performance values are inferior to those of the new algorithm. Summarizing the NSGA results, the changes of the backtrack times and the number of layers are shown in Fig. 14.

Figure 14. Operation efficiency indices of the NSGA.
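The controlling variable procedure used throughout this section can be sketched as a one-at-a-time sweep: each parameter is varied over its grid while the others stay at the baseline of Table 7. The function names and the toy fitness surface below are illustrative assumptions, not the paper's code; the toy merely echoes the reported trend that very large mutation rates degrade the result.

```python
def sweep(run_algorithm, grids, baseline):
    """One-at-a-time sweep (controlling variable method): vary a single
    parameter over its grid while holding the others at the baseline."""
    results = {}
    for name, values in grids.items():
        for v in values:
            params = dict(baseline, **{name: v})   # vary one, hold the rest
            results[(name, v)] = run_algorithm(params)
    return results

# toy stand-in for one NSGA run: roughly flat near 0.71, penalized only
# for very large mutation rates (echoing the trend reported for Fig. 9)
def toy_run(p):
    return 0.71 - 0.2 * max(0.0, p["mutation_rate"] - 0.05)

baseline = {"crossover_rate": 0.7, "mutation_rate": 0.018,
            "population_size": 40, "genetic_algebra": 4000}
grids = {"crossover_rate": [0.3, 0.5, 0.7],
         "mutation_rate": [0.005, 0.05, 0.5],
         "population_size": [20, 40, 100]}
out = sweep(toy_run, grids, baseline)
print(min(out.values()), max(out.values()))
```

Replacing `toy_run` with a real NSGA or GA run reproduces the sensitivity figures of this section.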
When the population is small, the layer and backtrack counts are comparatively large, but they remain small once the population size reaches 30 or more. Thus, the NSGA maintains good computational efficiency when the population size is set between 30 and 100. For general OSP problems whose solution domains are comparable to the example in this paper, the recommended ranges of the preset parameters are as follows: crossover rate from 0.3 to 0.7; mutation rate from 0.005 to 0.05; population size from 30 to 100; and genetic algebra from 3000 to 10 000, with small adjustments permitted under different conditions.

3.3.2. Extracted Data Volume Parameter

In the OSP problem, choosing how many orders of modal data to adopt is a long-standing difficulty: a higher order yields a larger amount of data and a larger solution domain for the discrete variables, and using more data demands stronger robustness and computing capability from the algorithm. In this example, the first 10, 15, 20, 25 and 30 modal orders are used for calculation and comparison. Fig. 15 shows that the NSGA and the GA obtain the same results when the amount of modal data is small, but the results of the GA fall below those of the NSGA once the modal data exceed a certain range. As the modal data increase further, the results of the GA decline even more, which demonstrates that the NSGA handles massive amounts of data better than the GA and is more sensitive to discrete solutions in a concentrated codomain. The indices that affect the efficiency of the NSGA are summarized in Fig. 16.

Figure 15. Impact of mode number.

Figure 16.
Operation efficiency indices of the NSGA.

When the amount of data is small, the number of layers and backtrack times are small as well, and the running efficiency of the algorithm approaches that of the GA. As the data order increases, both indices grow to some extent, which indicates that processing a massive amount of data requires a longer computing time; this behavior is consistent with the characteristics of the algorithm.

3.3.3. Backtrack Parameter

Backtracking is an important step of the algorithm that directly determines its evolutionary effectiveness. In this example, two backtrack conditions are utilized; once either condition is satisfied during the calculation, a degeneration is judged to have occurred, and the backtrack operation is performed. In the NSGA, the backtrack confidence level determines how strictly this mechanism is enforced. A high confidence level forces a faster evolution of the algorithm, but it places a larger computational burden on each layer, decreasing the efficiency of a single layer; conversely, a lower confidence level keeps the operations stable but increases the number of layers. The function of the confidence level proposed in this paper is therefore to guarantee a certain amount of evolution in each layer, with the evolution ratio depending on its magnitude; an appropriate backtrack confidence level guarantees the accuracy and validity of the final result. Four values are examined: 0.99, 1.00, 1.01 and 1.02. The value of 0.99 ensures that the results do not degenerate by more than 1%, and the evolution speed is then determined mainly by the randomness of the algorithm.
The value of 1.00 forces the performance values to evolve constantly, with the evolution speed again governed by randomness; unlike the 0.99 case, in which the same performance value may recur in different layers, the results under 1.00 increase layer by layer. Under the value of 1.01 the performance value must increase by more than 1% per layer, and under 1.02 by more than 2%. Comparing the four cases, the computations are smooth and the results are stable under 0.99 and 1.00. With 1.01, the algorithm stagnates within single layers, which indicates that a 1% improvement per layer is difficult to achieve, and the operation time is much longer; with 1.02, no results can be obtained at all. Hence, a backtrack confidence value of 1.00 is recommended for the example in this paper, and it can be extended to most problems.

3.4. Parameter Sensitivity Analysis

3.4.1. Preset Parameters

Processing the results obtained from the parameter calculations, the changes of the maximum non-diagonal element value Max(Ti,j) as the parameters vary are shown in Figs. 17–20. When one parameter is varied, the others are fixed at a crossover rate of 0.7, a mutation rate of 0.02, a genetic algebra of 4000 for the NSGA and 5000 for the GA, and a population size of 40; 30 orders of modal data are used in these cases. Table 8 then lists the numbers of layers and backtrack times under parameter values that make the GA operations inside the NSGA converge or converge prematurely; when one parameter is sampled, the other parameters are kept at values that guarantee convergence of the GA operations. Table 8.
Quantity of layers and backtrack times under converged and prematurely converged parameters.

Converged parameters
                 Crossover rate    Mutation rate     Genetic algebra   Population size
Parameter value  0.6      0.7      0.02     0.03     4000     5000     40       50
Layer            5        7        7        10       7        9        7        10
Backtrack times  0        0        0        1        0        19       0        16
Max(Ti,j)        0.2864   0.2842   0.2842   0.2834   0.2842   0.2858   0.2842   0.2866

Premature converged parameters
                 Crossover rate    Mutation rate     Genetic algebra   Population size
Parameter value  0.3      0.4      0.005    0.5      1000     2000     20       30
Layer            7        6        9        7        58       26       150      43
Backtrack times  20       10       27       22       3173     1422     3579     256
Max(Ti,j)        0.2844   0.2867   0.2871   0.2834   0.2846   0.2871   0.2880   0.2883

Figure 17. Changes of Max(Ti,j) with varying crossover rate (mutation rate 0.02).

Figure 18. Changes of Max(Ti,j) with varying mutation rate (crossover rate 0.7).

Figure 19. Changes of Max(Ti,j) with varying genetic algebra.

Figure 20. Changes of Max(Ti,j) with varying population size.
The following phenomena can be observed directly from the results for the four parameters (crossover rate, mutation rate, genetic algebra and population size). The preset parameters have a pronounced impact on the GA, which cannot obtain an accurate result, especially when the chosen parameters do not allow the algorithm to reach convergence. The NSGA, in contrast, is almost unaffected when the preset parameters change: the operation time increases when the parameters do not meet the standard of convergence, but the results are unaffected, because the NSGA solves the discrete variables by a partitioned method and the performance values do not fall into local optima even under premature convergence. Therefore, the NSGA has low sensitivity to the preset parameters, and its anti-jamming ability is better than that of the traditional GA. Note, however, that excessively small settings of the genetic algebra (1000) and population size (20) trigger a large number of backtrack operations; thousands of backtracks in a single run greatly decrease the efficiency. Although the results are not affected, extremely small parameter values should be avoided to keep the operation speed reasonable.

3.4.2. Data Volume Parameter

A high modal order causes a great number of calculations and a large quantity of TMAC non-diagonal element values, making the solution path of the algorithm more cumbersome. The traditional GA therefore suffers a sharp decline of the performance value as the data size grows. The NSGA also shows a certain decrease of the performance value in this situation, primarily because of the change of the data volume itself. The maximum non-diagonal elements of TMAC, which estimate the orthogonality of the mode shape vectors, are shown in Table 9. Table 9.
Maximum non-diagonal elements of TMAC as the modal order changes (crossover rate 0.7, mutation rate 0.02, population size 40 and genetic algebra 5000).

Modal order   10       15       20       25       30
GA            0.1048   0.1125   0.2037   0.2079   0.3221
NSGA          0.1048   0.1125   0.1639   0.2285   0.2811

The growth of the values for both algorithms also shows that increasing the data makes the judgment of orthogonality more rigorous, independently of the algorithm mechanism. Under high modal orders, different non-diagonal element values, which incorporate more of the bridge's dynamic information into the optimization, are obtained to evaluate the orthogonality between two nodes.

3.4.3. Backtrack Confidence Level

The backtrack condition set in this example simply adds a confidence level to the population average fitness. This approach ensures the evolution of the entire population and controls the evolution rate of the performance values.

4. CONCLUSIONS

Based on the principles of the GA and the NPA, a new algorithm for massive data processing was proposed in this paper. From the structure and principles of the new algorithm, and from the sensitivity analysis of its parameters, the following conclusions can be drawn: The NSGA combines the excellent searching ability of the GA with the fast convergence of the NPA; it can quickly and effectively solve problems with large-scale discrete points while ensuring the accuracy of the results.
The traditional GA cannot reach the solution accurately in a high-index solution domain, and it is sensitive to disturbances because of the local optima. The NSGA reduces the interference caused by the local optima by decreasing the index of the solution domain and fixing partial solutions, which avoids large-scale errors and improves the accuracy of the results; both the optimized results and the efficiency are better than those of the traditional algorithm. The NSGA shows low sensitivity to changes of the preset parameters in the genetic operations; only some extremely low values affect the running speed, and the results remain excellent and stable. For the data volume parameter, the dimension of the performance value increases as the data volume grows; the operational comparison shows that the NSGA is stronger at processing massive amounts of data and at identifying performance values with a highly concentrated discrete degree. The running efficiency depends largely on how many times the backtrack operation is executed; the backtrack operation thus directly affects the running speed, which demonstrates that the algorithm is sensitive to the backtrack confidence level. One remaining problem lies in the operation process: approximately half of the variables are determined in the first layer, before the backtrack operation can be triggered, which is detrimental to the further computation and to the final solution. If a loop iteration could be used in the operation process, a more accurate global optimum would be obtained. The NSGA can be used not only for sensor optimization but also for most massive data processing tasks, and it therefore has great application potential.
ACKNOWLEDGEMENTS

The authors are grateful to Yingqi Liu, Yasi Deng and Weidong Jiang at Wuhan University of Technology, Wuhan, Hubei Province, China, for their support of this research. The paper was supported by the Inner Mongolia Traffic Science and Technology Projects and the Fundamental Research Funds for the Central Universities (2017-YB-014).

REFERENCES

[1] Yang, Y. and Nagarajaiah, S. (2014) Blind denoising of structural vibration responses with outliers via principal component pursuit. Struct. Control Health Monit., 21, 962–978.
[2] Kammer, D.C. (1990) Sensor placement for on-orbit modal identification and correlation of large space structures. J. Guid. Control Dyn., 14, 251–259.
[3] Cheng, J.Q. (2012) Optimal sensor placement for bridge structure based on improved effective independence. J. Vib. Meas. Diagn., 32, 812–816 (in Chinese).
[4] Zhou, G.D. (2015) A comparative study of genetic and firefly algorithms for sensor placement in structural health monitoring. Shock Vib., 2015, 1–10.
[5] Papadimitriou, C. (2004) Optimal sensor placement methodology for parametric identification of structural systems. J. Sound Vib., 278, 923–947.
[6] Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, New Jersey.
[7] Holland, J.H. (1975) Adaptation in Natural and Artificial Systems. MIT Press, Cambridge.
[8] Zhou, G.D. (2013) The node arrangement methodology of wireless sensor networks for long-span bridge health monitoring. Int. J. Distrib. Sensor Netw., 2013, 1172–1175.
[9] Monti, G. (2010) Genetic-algorithm-based strategies for dynamic identification of nonlinear systems with noise-corrupted response. J. Comput. Civ. Eng., 24, 173–187.
[10] Koh, C.G. (2006) Genetic algorithms in structural identification and damage detection. Intell. Comput. Paradig. Earthq. Eng., 2006, 316–341.
[11] Chisari, C. (2015) Dynamic and static identification of base-isolated bridges using Genetic Algorithms. Eng. Struct., 102, 80–92.
[12] Onnen, C. (1997) Genetic algorithms for optimization in predictive control. Control Eng. Pract., 5, 1363–1372.
[13] Gao, W.C. (2008) Optimization of sensor placement by genetic algorithms. J. Harbin Inst. Technol., 40, 9–11 (in Chinese).
[14] Zhou, L. (2014) Optimized arrangement of sensors for continuous rigid frame bridge. Highway, 11, 36–40 (in Chinese).
[15] Cha, Y.J. (2013) Optimal placement of active control devices and sensors in frame structures using multi-objective genetic algorithms. Struct. Control Health Monit., 20, 16–44.
[16] Shabbir, F. (2016) Model updating using genetic algorithms with sequential niche technique. Eng. Struct., 120, 166–182.
[17] Moisés, S. (2016) A novel unsupervised approach based on a genetic algorithm for structural damage detection in bridges. Eng. Appl. Artif. Intell., 52, 168–180.
[18] Kennedy, J. and Eberhart, R. (1995) Particle Swarm Optimization. Proc. IEEE Int. Conf. Neural Networks, Perth, Western Australia, November 27–December 1, pp. 1942–1948. IEEE Service Center, Perth.
[19] Li, J.L. (2015) Optimal sensor placement for long-span cable-stayed bridge using a novel particle swarm optimization algorithm. J. Civil Struct. Health Monit., 5, 677–685.
[20] Yi, T.H., Li, H.N. and Zhang, X.D. (2012) A modified monkey algorithm for optimal sensor placement in structural health monitoring. Smart Mater. Struct., 21, 52–53.
[21] Yi, T.H., Li, H.N. and Zhang, X.D. (2012) Sensor placement on Canton Tower for health monitoring using asynchronous-climb monkey algorithm. Smart Mater. Struct., 21, 615–631.
[22] Aggelogiannaki, E. and Sarimveis, H. (2007) A simulated annealing algorithm for prioritized multiobjective optimization—implementation in an adaptive model predictive control configuration. IEEE Trans. Syst. Man Cybern. B Cybern., 37, 902–915.
[23] Yang, X.S. (2009) Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications, Lecture Notes in Computer Science, vol. 5792, Sapporo, Japan, pp. 169–178. Springer, Berlin Heidelberg.
[24] Zhou, G.D. (2015) Energy-aware wireless sensor placement in structural health monitoring using hybrid discrete firefly algorithm. Struct. Control Health Monit., 22, 648–666.
[25] Zhou, G.D. (2015) Optimal sensor placement under uncertainties using a nondirective movement glowworm swarm optimization algorithm. Smart Struct. Syst., 16, 243–262.
[26] Meng, Q.C. (2007) Research of optimal placement of sensors for health monitoring system of the 3rd Nanjing Changjiang River Bridge. Bridge Constr., 5, 76–79 (in Chinese).
[27] Huang, M.S. (2007) Serial method for optimal placement of sensors and its application in modal tests of bridge structure. Bridge Constr., 18, 80–83 (in Chinese).
[28] Shi, L. (2000) Nested partitions method for global optimization. Oper. Res., 48, 390–407.
[29] Shi, L. (2009) Nested Partitions Method, Theory and Applications. Springer, New York.
[30] Chauhdry, M.H.M. (2012) Nested partitions for global optimization in nonlinear model predictive control. Control Eng. Pract., 20, 869–881.
[31] Zong, D.C. (2015) Combined nested partitions method based on local search algorithm. Appl. Res. Comput., 32, 752–758 (in Chinese).
[32] Zhang, B.Y. (2016) Sensor location optimization of large span bridge based on nested-stacking genetic algorithm. J. Wuhan Univ. Technol., 40, 745–749 (in Chinese).
[33] Carne, T.G. and Dohrmann, C.R. (1994) A Modal Test Design Strategy for Modal Correlation. Proc. 13th Int. Conf. Modal Analysis, Nashville, TN, United States, pp. 927–933. Proceedings of SPIE, Washington, DC.
[34] Yi, T.H. (2014) Hierarchic wolf algorithm for optimal triaxial sensor placement. J. Build. Struct., 35, 223–229 (in Chinese).

© The British Computer Society 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model). The Computer Journal, Oxford University Press.

Publisher: Oxford University Press
ISSN: 0010-4620
eISSN: 1460-2067
DOI: 10.1093/comjnl/bxx105

With the development of approaches to the OSP problem, the Genetic Algorithm (GA) has often been chosen as the optimization algorithm [6]. The GA is a heuristic random search algorithm derived from Darwin's theory of biological evolution [7], developed when researchers began using computers to simulate natural organisms.
The GA uses an encoded string to represent a candidate solution and evaluates the merit of each individual by calculating the fitness function; the optimum is then approached through crossover and mutation operations applied to each individual to simulate natural evolution. In recent years, the GA has been combined with other algorithms and methods to address many problems [8–15]. Shabbir [16] adopted an improved GA for the structural model updating process, and Silva [17] devised another GA-based algorithm to detect structural damage in bridges; the GA has thus played an important role in handling general optimization problems. As heuristic algorithms were adopted for sophisticated global optimization problems, the Particle Swarm Optimization (PSO), Monkey Algorithm (MA) and Simulated Annealing (SA) algorithms emerged, and their applications to the OSP problem have been researched recently [18–22]. The Firefly Algorithm (FA), based on the idealized flashing behavior of fireflies, was proposed by Yang [23]. Improved algorithms have been presented in recent research as well: Zhou [4, 24] proposed a Simple Discrete FA (SDFA) and a Hybrid Discrete FA (HDFA) for the cabled and wireless sensor placement problems, respectively, and a Nondirective Movement Glowworm Swarm Optimization (NMGSO) algorithm, based on the basic glowworm swarm optimization algorithm, was proposed to identify the effective Pareto-optimal sensor configurations [25]. Because global optimization algorithms have a wide range of applications and strong applicability [26, 27], the Nested Partitions Algorithm (NPA), in addition to the GA, performs well on most optimization problems.
The NPA was first proposed in 1997. Through a large number of examples, Shi [28, 29] and other scholars [30, 31] found that the NPA converges rapidly and has fine global search capability together with strong local search capability. For general discrete-event dynamic system problems it is difficult to find an analytical solution path, because the structure of the solution space is ambiguous; in this situation the NPA provides a random search approach that bypasses the analytical route. The NPA consists of four steps: Partition, Sampling, Selection and Backtrack. With the extensive construction of long-span bridges, however, the number of sensors arranged on a bridge has also increased. The OSP problem has evolved into a combinatorial optimization problem whose variables are multi-dimensional and discrete, so the problem has a large basic solution domain. For such problems, the GA and its improved variants cannot obtain results quickly and accurately, and tuning the many parameters that exist in these algorithms is an enormous workload. To resolve these problems, a Nested-Stacking GA (NSGA) that combines the NPA with the GA is proposed in this paper. The NSGA [32] is built on the theory of the NPA and uses the GA as its computing tool; its innovation is that it combines the advantages of the NPA and the GA, making it more effective at handling large amounts of modal data. The main control parameters of the new algorithm are listed, and the sensitivity of each parameter is analyzed on a model of an actual bridge to determine suitable parameter values for similar problems. 2. OPTIMIZATION ALGORITHM 2.1. Target Function for Optimization The optimization process of the proposed algorithm is based on the target function, which must be established first.
In the OSP problem, vibration-based structural condition assessment methodologies require that the measured mode shapes be distinguishable from each other so that they can be reliably identified [4]. Consequently, the MAC [33] is commonly used to build the modal assurance matrix:

MAC_{i,j} = \frac{(\Phi_{*,i}^{T}\Phi_{*,j})^{2}}{(\Phi_{*,i}^{T}\Phi_{*,i})(\Phi_{*,j}^{T}\Phi_{*,j})} \quad (1)

where i and j are the modal orders; MAC_{i,j} is the element in the ith row and jth column of the modal assurance matrix; and \Phi_{*,i} and \Phi_{*,j} are the ith and jth columns of the modal matrix, respectively. By estimating the orthogonality of the selected mode shape vectors, the MAC indicates whether complete information on the bridge is collected by the sensors: a small maximum off-diagonal element of the MAC matrix implies low correlation between the corresponding mode shape vectors. The off-diagonal elements of the MAC are therefore used to guide sensor placement. Because the traditional MAC matrix cannot jointly optimize the three directions x, y and z of a 3D sensor, Yi [34] developed a three-dimensional MAC (TMAC) by assembling the three translational DOFs of a structural node into an independent unit:

T_{i,j} = \frac{Q_{i,j}^{2}}{Q_{i,i}\,Q_{j,j}} \quad (2)

where T_{i,j} \in [0,1] is the element in the ith row and jth column of the 3D modal assurance matrix T, and Q_{i,j} is the element in the ith row and jth column of the FIM Q, which takes the node at which sensors will be laid as its unit:

Q = \sum_{k=1}^{n_{sp}} \phi_{3k,*}^{T}\phi_{3k,*} = \sum_{k=1}^{n_{sp}} Q_{k} \quad (3)

where Q_k is the FIM of the node at the position of the kth 3D sensor; \phi_{3k,*} is the matrix consisting of the modal vectors corresponding to the three DOFs of the kth node in the modal matrix \phi; and n_{sp} is the number of sensors to be arranged. The spatial information of the bridge is thus considered in the TMAC, which is closer to the practical condition.
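Equation (1) can be evaluated directly from a mode-shape matrix. A minimal numpy sketch follows; the random modal matrix is only a stand-in for real modal-analysis output.

```python
import numpy as np

def mac_matrix(Phi):
    """Modal assurance matrix of Eq. (1): entry (i, j) measures the
    correlation between the i-th and j-th mode-shape vectors, which
    are the columns of the modal matrix Phi."""
    G = Phi.T @ Phi                                # all inner products at once
    return G**2 / np.outer(np.diag(G), np.diag(G))

# Stand-in modal matrix: 20 measured DOFs, 4 modes.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 4))
M = mac_matrix(Phi)
# Diagonal entries equal 1 (a mode correlates perfectly with itself);
# small off-diagonal entries indicate near-orthogonal mode shapes.
```

By the Cauchy–Schwarz inequality every entry lies in [0, 1], which is what makes the off-diagonal maximum usable as a placement criterion.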
In the sensor site selection process, the smaller the maximum off-diagonal element of the TMAC matrix, the better the orthogonality between two selected nodes and the more bridge information is reflected; the off-diagonal elements of the TMAC lie in [0, 1]. Thus 1 − T can be used as the optimization target function, referred to hereinafter as the fitness function in the genetic operation, whose results are the performance values:

Fitness = 1 - \max_{i \neq j}(T_{i,j}) \quad (4)

A larger performance value indicates a better sensor position. 2.2. Nested-Stacking GA When the two algorithms described above, the GA and the NPA, are applied to the OSP problem, some shortcomings appear despite their impressive global search ability. The NPA is limited by its sampling quantities, since it uses a random sampling method: it may extract large amounts of weakly representative data, which easily leads the selection operation into an incorrect partition. In that case the backtracking operation is triggered, which reduces the algorithm's efficiency. The GA, although it searches globally, easily falls into local optima when the solution space grows exponentially, because its global search mechanism is applied unconditionally. Therefore, the GA and the NPA are both ineffective when applied to current sensor placement problems for long-span bridges. For a general optimization problem, the optimization function is assumed to take the form

G = F(x_1, x_2, \ldots, x_m), \quad x_i \in \Theta \ (i = 1, 2, \ldots, m) \quad (5)

where the x_i are the independent variables; m is the number of non-interfering independent variables; \Theta is the definition domain of the independent variables; and F may be a determined functional expression or mathematical structure, or a blurred black-box problem. The solution sought is z = G_{max}. Formula (5) can express most optimization problems.
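For the OSP instance considered in this paper, F in formula (5) is the fitness of formula (4), built from Eqs. (2) and (3). A minimal numpy sketch of its evaluation; the random modal matrix and node indices are placeholders for real data.

```python
import numpy as np

def tmac_fitness(phi, nodes):
    """Fitness of Eq. (4) for one sensor layout: build the FIM of
    Eq. (3) from the 3-DOF row blocks of the selected nodes, form the
    TMAC of Eq. (2), and return 1 minus its largest off-diagonal term."""
    # phi has 3 rows per node (x, y, z DOFs) and one column per mode.
    Q = sum(phi[3 * k:3 * k + 3, :].T @ phi[3 * k:3 * k + 3, :]
            for k in nodes)
    T = Q**2 / np.outer(np.diag(Q), np.diag(Q))
    off_diag = T - np.diag(np.diag(T))       # zero out the unit diagonal
    return 1.0 - off_diag.max()

# Stand-in modal matrix: 50 candidate nodes (150 DOFs), 6 modes.
rng = np.random.default_rng(1)
phi = rng.standard_normal((150, 6))
f = tmac_fitness(phi, nodes=[0, 10, 20, 30, 40])
```

A layout with a larger return value has more nearly orthogonal selected mode shapes, which is exactly the quantity the genetic operations maximize below.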
The NSGA proposed in this paper builds an algorithm framework on the ideology of the NPA, using the GA as the basis of the partition step and the random sampling method in the sampling step of the NPA. The core concept is to approach the solution of the problem by continuously reducing the dimensionality of the independent variables; in this paper, the dimensionality is the number of sensors. Each loop through the steps of the algorithm is called a Layer, and each layer reduces the dimensionality of the variables once, i.e. a certain number of sensor positions are determined. The mode in which the layers stack up, each layer nested in the next, is called the nested-stacking operation. The algorithm has two characteristics. First, the exponent of the solution space is reduced by the partition operation, which shrinks the search space and thereby increases the accuracy of the global optimization. Second, a step-by-step approach finds incomplete solutions of the discrete variables at increasing accuracy rather than obtaining the complete solution in one step, which decreases the error of a wide-range random search. The scheme of the NSGA derived from the algorithm theory is shown in Fig. 1. Figure 1. Scheme of the NSGA. 2.2.1. Coding and Initialization of the Algorithm First, the finite element model of the bridge structure that requires a sensor arrangement is built. Next, the modal matrix of the structure is obtained through modal analysis, and the candidate sensor positions are numbered at the same time.
Because the TMAC matrix considers the intersection angle of the modal vectors but not the nearly identical FIM indices that can exist between neighboring nodes, a certain number of sensors are distributed evenly over the whole length of the bridge to avoid information redundancy, so that different sensors occupy different locations. This effectively prevents several sensors from being arranged in the same zone, and the decimal encoding used in this algorithm can easily express sensor locations over the full-bridge range. Assuming that the whole bridge length is L, the number of candidate sensor points is n and the number of sensors is m, the coding and initialization steps are as follows. Step 1: The candidate points over the whole bridge are numbered 1 to n; each point corresponds to a node in the bridge model and a position coordinate (s_i, 0, 0), i \in \{1, 2, \ldots, n\}. Step 2: The full bridge length L is divided equally according to the number of sensors, and the sensors are arranged by zone, as shown in Fig. 2; each zone contains a different number of candidate points, and the length of each zone is l = L/m. Figure 2. Zone layout of the sensors. Here s_i are the abscissae of the points, i \in [1, n], s_i \in [0, L]; t_j is the position number of a candidate sensor position in zone j, j \in [1, m], t_j \in [1, n]; and r_j is the point number at the zone boundary, r_j \in [1, n], where i, j, t and r are integers. If a node t_j(s_{t_j}, 0, 0) satisfies (j-1)l < s_{t_j} \le jl, then t_j \in (r_{j-1}, r_j]. Therefore the set \{t_1, t_2, \ldots, t_j, \ldots, t_m\}, t_j \in (r_{j-1}, r_j], consisting of one sensor point selected from each of the m zones, is one value of the independent variables of the problem. The m independent variables are called genes, and this set is called an individual.
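The zoning rule of Step 2, one sensor drawn from each equal-length zone, can be sketched as follows. The 100 m bridge, its candidate coordinates and the sensor count are hypothetical numbers chosen only for illustration.

```python
import math
import random

def zone_bounds(s, L, m):
    """Group candidate points by the rule of Step 2: zone j covers the
    interval ((j-1)*L/m, j*L/m], and a point numbered t with abscissa
    s_t belongs to zone j when (j-1)*l < s_t <= j*l."""
    l = L / m
    zones = [[] for _ in range(m)]
    for t, st in enumerate(s, start=1):        # points numbered 1..n
        j = min(max(math.ceil(st / l), 1), m)  # zone index, clamped to 1..m
        zones[j - 1].append(t)
    return zones

def random_individual(zones):
    """One individual: one sensor point drawn from every zone."""
    return [random.choice(z) for z in zones if z]

# Hypothetical example: 100 m bridge, 20 candidate points, 5 sensors.
L, m = 100.0, 5
s = [5 * k for k in range(1, 21)]              # abscissae 5, 10, ..., 100
zones = zone_bounds(s, L, m)
ind = random_individual(zones)                 # one gene per zone
```

A population is then simply a list of such individuals, generated repeatedly.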
A population consists of p randomly generated individuals. Step 3: Set the initial related parameters, including the initial genetic algebra G_s and the crossover and mutation rates of the GA. Next, a population is generated randomly as the first-generation set of individuals in the algorithm operations, and finally the modal matrix extracted previously is imported. 2.2.2. Free Partition The free partition operation searches for individuals in the subregions through a plurality of parallel GAs; the superregion selected in the last iteration is then divided into two regions, the feasible region and the unfeasible region, according to the repeated components of the discrete-variable solutions. The number of parallel GAs is set to N in the first operation, whose iteration layer number is 1. The modal data corresponding to the gene positions of an initialized individual \{t_1, t_2, \ldots, t_m\} are extracted, and the performance values are calculated by substituting the modal data into formula (4). Selection, crossover and mutation operations are then performed until the number of iterations reaches G_s, at which point each GA converges and its solution \{x_1^N, x_2^N, \ldots, x_m^N\} is obtained. Comparing the solutions of the N parallel algorithms, if the jth gene satisfies x_j^1 = x_j^2 = \ldots = x_j^N, that gene is confirmed to have a large impact on the target function. The u genes that agree in this way are found by comparing all m gene positions in order. Thus \{x_u, t_{m-u}\} is the solution expression, named the first incomplete solution domain, where x_u and t_{m-u} are the determined and undetermined variables, respectively. This solution divides the solution domain into two subregions: the incomplete solution domain (also named the feasible region) and the domain of the remaining solutions (called the unfeasible region).
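The gene-agreement rule just described, fixing every position on which all N parallel runs concur, can be sketched as follows (the example solutions are hypothetical):

```python
def fix_common_genes(solutions):
    """Free-partition rule: given the converged solutions of N parallel
    GA runs, return {position: value} for every gene position on which
    all runs agree; these genes are fixed in later layers."""
    first, rest = solutions[0], solutions[1:]
    return {j: g for j, g in enumerate(first)
            if all(sol[j] == g for sol in rest)}

# Three parallel runs agreeing only on positions 0 and 3.
runs = [[7, 2, 9, 4, 1],
        [7, 5, 9, 4, 8],
        [7, 6, 3, 4, 1]]
fixed = fix_common_genes(runs)   # -> {0: 7, 3: 4}
```

The fixed positions form the incomplete solution x_u; the remaining positions stay free for the next layer's sampling.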
If the iteration layer number is a, a > 1, the same v genes have been determined after several iterations. The solution in this layer further divides the feasible region obtained from the last layer into the a-times incomplete solution domain \{x_v, t_{m-v}\} and the unfeasible region; if v = m, then \{t_m\} must be the optimal solution. The steps of the free partition are therefore as follows. Step 1: Compare the gene values at the same positions of the solutions obtained from the N parallel GAs. Step 2: Obtain the (a + 1)-times incomplete solution domain by fixing the positions of the genes with identical values. 2.2.3. Subregion Sampling The incomplete solution domain obtained from the partition operation is searched randomly by the GA. Among random search methods, the GA has strong global dominance and its search results are highly representative, so it reflects the performance values of the subregions well. The steps of subregion sampling are as follows. Step 1: In the a-times incomplete solution domain \{x_v, t_{m-v}\} obtained from the free partition operation, new values are assigned randomly to the t_{m-v} variables. Step 2: N solutions are obtained by genetic operation on the t_{m-v} variables, and the best individual performance value and the average performance value of the best population of each solution are recorded. 2.2.4. Layered Iteration and Selection The number of variables in the target function is reduced by gradually fixing the values of the variables that most affect the operation results; the number of unknown quantities in the incomplete solutions is thus reduced layer by layer. This step proceeds as follows: provided the backtrack mechanism is not triggered, the values of the remaining m − v variables in the solutions \{x_v, x_{m-v}\} of the N parallel algorithms obtained from the subregion sampling operation are compared.
If identical values exist among the m − v variables, they are moved into the incomplete solution, i.e. the feasible region of the next generation is selected, and the operation enters the next layer; otherwise, the operation remains at the present layer. 2.2.5. Backtrack Backtracking is implemented when the sampling performance value declines during the iteration process, which signals an error in the random sampling; the process is similar to that in the NPA. The basic condition is that the number of layers must be greater than 1. In subregion sampling, each GA operation yields a best individual and a best population, and the performance value is the criterion for choosing the best individual. If the best individual performance of layer a + 1 is less than that of layer a, the algorithm may have degenerated; in this case the algorithm backtracks to the previous layer and repeats the operation. Moreover, to ensure the heredity of the superior genes, the backtrack operation is also performed when the average performance value of the best population satisfies Fitness(a+1) < \alpha \cdot Fitness(a), where \alpha is the confidence level, i.e. the population has degenerated. In addition, the genetic operation in different layers involves different numbers of variables: the deeper the layer, the fewer the variables. The initial genetic algebra G_s can therefore be decreased linearly in proportion to (m − v)/m. Operating efficiency improves because the reduced number of variables reduces the number of iterations. 2.3. Benchmark Test of the NSGA To test the computational effectiveness of the NSGA, two benchmark functions are used. 2.3.1. Sphere model The function is

f_1(X) = \sum_{i=1}^{n} x_i^2 \quad (6)

a nonlinear symmetric function with a single extremum.
The global minimum is f_1 = 0, obtained at x = \{0, 0, \ldots, 0\}. In this example, 10 independent variables with value ranges x_i \in [-100, 100] and a gap of one are adopted, so 200 values can be chosen for each variable and the solution space size is 200^{10}. Following the traditional GA, the parameters of the GA and the NSGA are set as in Table 1.

Table 1. Parameter settings.
Algorithm  Crossover rate  Mutation rate  Population size  Chromo size
GA         0.7             0.015          40               10
NSGA       0.7             0.015          40               10

The genetic algebra is set to two values for comparison, 5000 and 10 000, and the results in Tables 2 and 3 are obtained after several calculations.

Table 2. The results of the GA.
Run  Value (5000)  Value (10 000)
1    0             1
2    0             2
3    3             0
4    2             0
5    2             0

Table 3. The results of the NSGA.
     Genetic algebra 5000              Genetic algebra 10 000
Run  Value  Layer  Backtrack times     Value  Layer  Backtrack times
1    0      4      5                   0      2      0
2    1      5      4                   0      2      1
3    0      4      6                   0      2      0
4    0      4      4                   0      2      0
5    0      4      5                   0      2      1
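The sphere model of Eq. (6) used in these runs translates directly to code (the GA and NSGA drivers themselves are omitted here):

```python
def sphere(x):
    """Sphere model of Eq. (6): f1(X) = sum of x_i^2, with its single
    global minimum f1 = 0 at the origin."""
    return sum(v * v for v in x)

# The benchmark searches 10 integer variables on [-100, 100];
# the optimum is the all-zero vector.
best = sphere([0] * 10)       # -> 0
corner = sphere([100] * 10)   # -> 100000
```

Because the function is unimodal, any residual nonzero value in Tables 2 and 3 measures how far a run stopped from the true optimum.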
Comparison of the results of the two algorithms shows the excellent performance of the NSGA on this function: its results almost all converge to the global optimum. 2.3.2. Ackley function The function is

f_2(X) = -20\exp\!\left(-0.2\sqrt{\frac{1}{30}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\frac{1}{30}\sum_{i=1}^{n}\cos 2\pi x_i\right) + 20 + e \quad (7)

The function with two variables is shown in Fig. 3. It is a continuous, inseparable function with multiple extremums, and the independent variables are separated from each other. The global minimum is f_2 = 0, obtained at x = \{0, 0, \ldots, 0\}. Figure 3. Ackley function with two variables. Figure 4. Layout of the full-bridge example. In this example, 30 independent variables with value ranges x_i \in [-10, 10] and a gap of 0.1 are adopted, so 200 values can be chosen for each variable and the solution space size is 200^{30}. The parameters of the GA and the NSGA are set as in Table 4.

Table 4. Parameter settings.
Algorithm  Crossover rate  Mutation rate  Population size  Chromo size
GA         0.5             0.01           50               30
NSGA       0.5             0.01           50               30

The genetic algebra is set to three values for comparison: 5000, 10 000 and 20 000. The results, shown in Tables 5 and 6, are obtained after calculation. Table 5. The results of GA.
Run  Value (5000)  Value (10 000)  Value (20 000)
1    2.1585        0.7828          0.2139
2    2.0161        0.8620          0.4714
3    1.8402        1.1048          0.4184
4    2.1369        0.9243          0.3342
5    2.7041        0.8757          0.3405

Table 6. The results of the NSGA.
     Genetic algebra 5000             Genetic algebra 10 000            Genetic algebra 20 000
Run  Value   Layer  Backtrack times   Value   Layer  Backtrack times    Value  Layer  Backtrack times
1    0.2139  12     27                0.1775  9      0                  0      4      0
2    0       10     36                0.0901  8      9                  0      4      1
3    0.0901  10     37                0       5      10                 0      4      0
4    0.1374  10     24                0.1374  8      0                  0      5      1
5    0.1775  10     43                0       6      5                  0      4      0

Table 5 clearly shows that the GA converges slowly and that its solutions cannot reach the global optimum even at a genetic algebra of 20 000.
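The Ackley function of Eq. (7), on which these results were obtained, also translates directly to code; the sketch below takes n = len(x), so n = 30 in the test:

```python
import math

def ackley(x):
    """Ackley function of Eq. (7) with n = len(x); the global
    minimum f2 = 0 is attained at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return (-20 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20 + math.e)

zero = ackley([0.0] * 30)   # the global optimum of the test
```

The cosine term plants a local minimum near every integer grid point, which is precisely what traps the plain GA in the Table 5 runs.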
In Table 6, however, the solution values reach the optimum already at a genetic algebra of 5000, and the results become stable as the genetic algebra increases. Comparing the GA and the NSGA on the two benchmark test functions leads to the conclusion that the NSGA performs better than the GA on ordinary functions. 3. PARAMETER NUMERICAL EXAMPLE To evaluate the performance of the proposed algorithm in an actual application, the following steps are performed: (1) the optimal sensor arrangement is calculated for an actual project, (2) the NSGA and the GA are compared and (3) the main control parameters are calculated and their recommended value ranges are given. 3.1. The Project Outline The project selected in this paper is the Wuhai Yellow River Extradosed Cable-Stayed Bridge in Inner Mongolia, China. This extra-large bridge crosses the Yellow River on the Wuhai section of State Road #110; its total length is 1130 m, of which the main bridge, 120 + 220 + 120 = 460 m, is a partial extradosed cable-stayed section. The girder is a single box with three cells made of C55 concrete, the pylon and girder are rigidly fixed, and the stay cables are anchored in the middle cell of the box. In this paper, the sensor arrangement along the girder is the main focus. The finite element model is built for the linked system, with the rigid pylon-girder fixity modeled in the superstructure. The finite element fish-bone model (Fig. 5) is established in Ansys; the main beam has 259 nodes and 258 elements. The stiffness and mass matrices are extracted from the model, the structural modes are obtained through modal analysis, and the first 30 modes are collected for the calculations in this paper. Figure 5. Finite element model of superstructure.
This problem is stated mathematically with the number of candidate positions on the girder n = 259. Based on experience, the number of sensors to be set on the girder is assumed to be m = 40, and the sensor type is the 3D accelerometer. 3.2. Operation Comparison To verify the computing capability of the new algorithm, the z-direction data of the 1D bridge model are first extracted for a tentative calculation, and the algorithm parameters that lead to convergence are determined after several trial runs. For a direct comparison between the GA and the NSGA, the parameters are set comparably, as shown in Table 7; the genetic algebra is 5000 for the GA and 4000 for the NSGA, which favors GA convergence while reducing the calculation time of the NSGA.

Table 7. Parameter settings for the tentative calculation.
Algorithm  Crossover rate  Mutation rate  Population size  Chromo size  Genetic algebra  Modal order
GA         0.7             0.018          40               40           5000             30
NSGA       0.7             0.018          40               40           4000             30

After several runs of the GA, part of the iteration process is shown in Fig. 6; the maximum fitness value is 0.6779. Although each run converges, every convergence value differs from the others: each is a local optimum, primarily because the vast basic solution domain produces an enormous number of local optima from which the algorithm cannot easily escape.
Several runs thus lead to different local optima, and the accuracy of these results cannot be determined. Figure 6. Iterations of the GA. After several computations on the same data, the NSGA reaches a fitness value of 0.7123, a marked improvement that breaks the convergence limit of the traditional GA; the algorithm stacks six layers, backtracks 45 times, and converges well in each layer, as shown in Fig. 7. Figure 7. Iterations of the NSGA. In the first layer, shown in Fig. 7a, the convergence is the same as that of the traditional GA, but with five parallel algorithms running, not all of the unknown quantities are fixed, so it remains possible to escape the local optimum in subsequent computations. In the fourth layer, shown in Fig. 7b, the convergence values differ considerably from each other and the backtrack operation is triggered, eliminating the fraction of solutions with low fitness values. In the last layer, shown in Fig. 7c, the results of three parallel algorithms converge to the same value, indicating that no local optimum persists in this layer; the result is likely the optimal solution. Fig. 7d shows that the best individual exhibits a strong convergence trend under the backtrack mechanism. The distribution of sensors is shown in Fig. 8. Figure 8. Distribution of sensors. 3.3.
Extraction and Calculation of the Control Parameters According to the principles and steps of the algorithm, the control parameters are chosen as follows: (1) parameters preset before the calculation: crossover rate, mutation rate, population size and genetic algebra; (2) the volume of data extracted for the calculation: the modal order of the data; (3) the backtrack confidence level used in the backtrack operation. The performance value to be optimized, i.e. the fitness value of the best individual in the genetic operations, is the primary criterion in the algorithm evaluation process; the 3D modal data are used here. 3.3.1. Preset Parameters The preset parameters are the calculation-related parameters of the algorithm that must be set beforehand and generally remain unchanged during the computations. In the parameter calculations that follow, all parameters other than the one under study are held constant, i.e. the controlled-variable method is used to guarantee that the results are reliable. Crossover and mutation rate In the traditional empirical operation of the GA, the crossover rate varies from 0.3 to 0.7 and the mutation rate from 0.005 to 0.5; these ranges are used here to observe the impact on the computation results. After adequate calculations, the results are shown in Figs. 9 and 10. Figure 9. Impact of crossover and mutation rate (NSGA). Figure 10. Impact of crossover and mutation rate (GA). The performance values of the NSGA, approximately 0.71, are good in every case. Compared with the new algorithm, the GA fluctuates strongly and its results are unstable and poorer than those of the new algorithm.
The performance values are lower when the mutation rate is too high, whereas they remain stable when the mutation rate is small; the variation of the results shows no regular pattern as the crossover rate changes. Therefore, a crossover rate from 0.3 to 0.7 is a proper range for the NSGA, and a mutation rate from 0.005 to 0.5 can lead to optima; however, because a large mutation rate causes inferior convergence in the GA operations inside the NSGA, a mutation-rate range of 0.005–0.05 is recommended to avoid poor running efficiency. Genetic algebra In this example, the GA converges in approximately 4000 iterations, so the parameter sensitivity analysis of the genetic algebra is centered on about 4000 iterations. As illustrated in Fig. 11, the performance values of the NSGA remain stable without significant fluctuation as the genetic algebra changes, whereas the performance values of the GA are small when the number of iterations is below the convergence algebra and fluctuate strongly when the number of iterations is too large. Figure 11. Impact of genetic algebra. The indices affecting the efficiency of the NSGA are summarized in Fig. 12. Figure 12. Operation efficiency indices of the NSGA. Fig. 12 shows that the number of layers of the new algorithm decreases gradually as the number of iterations increases. In addition, the number of backtracks at 1000 iterations is considerably large, while 3000 iterations do not cause additional backtrack operations. These data indicate that even when the iterations are insufficient for convergence, the new algorithm still guarantees the stability of the results.
However, the high layer and backtrack counts produced in that case have a negative impact on the running efficiency of the new algorithm, whereas both the results and the running efficiency are very stable when the number of iterations is high. Thus, a range of 3000–10 000 iterations is completely adequate for the calculation; as the exponent of the solution space increases, the genetic algebra can be enlarged accordingly. Population size As the group parameter of the GA that ensures the diversity of a single generation, the population size influences the convergence timing and algorithm efficiency. Population sizes of 20–100 individuals are calculated with the two algorithms in this study; the results are shown in Fig. 13. Figure 13. Impact of population size. In summary, the performance values and stability of the NSGA are excellent, and the algorithm shows weak sensitivity to this parameter, whereas the results of the GA fluctuate greatly and its performance values are inferior to those of the new algorithm. Summarizing the NSGA results, the variation of the backtrack count and iteration layer is shown in Fig. 14. Figure 14. Operation efficiency indices of the NSGA. When the population is small, the layer and backtrack counts are comparatively large, but they remain small when the population size is 30 or more. Thus, the NSGA maintains good computational efficiency when the population size is set between 30 and 100.
For general OSP problems whose solution domain is similar to the example in this paper, the following parameter ranges are recommended: crossover rate from 0.3 to 0.7; mutation rate from 0.005 to 0.05; population size from 30 to 100; and genetic algebra from 3000 to 10 000. Slight deviations are permissible under different conditions.

3.3.2. Extracted Data Volume Parameter

In the OSP problem, deciding how many orders of modal data to adopt is a long-standing and difficult question: the higher the order, the larger the amount of data and the larger the solution domain spanned by the discrete variables. More data demand stronger robustness and computing capability from the algorithm. In this example, the first 10, 15, 20, 25 and 30 orders of modal data are used for calculation and comparison.

Fig. 15 shows that the NSGA and the GA obtain the same results when the amount of modal data is small, but the results of the GA fall below those of the NSGA once the modal data exceed a certain range. As the modal data increase further, the results of the GA decline even more sharply, which demonstrates that the NSGA handles massive amounts of data better than the GA and is more sensitive to discrete solutions in a concentrated codomain. The indices that affect the efficiency of the NSGA are summarized in Fig. 16.

Figure 15. Impact of mode number.

Figure 16. Operation efficiency indices of the NSGA.

When the data amount is small, the iteration layer and the backtrack times are small as well.
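To give a sense of how quickly a discrete solution domain of this kind grows, consider the number of ways to choose k sensor locations from n candidate degrees of freedom (the values of n and k below are illustrative, not the bridge model in the paper):

```python
from math import comb

# Number of candidate sensor layouts: choose k sensor locations out of
# n candidate DOFs.  n and k are illustrative values only.
for n, k in [(50, 10), (100, 20), (200, 30)]:
    print(f"C({n},{k}) = {comb(n, k):,}")
```

Even modest increases in n and k push the layout count far beyond what exhaustive search can handle, which is why the partition-and-sample strategy of the NSGA matters as the data volume grows.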
At that point, the running efficiency of the algorithm approaches that of the GA. As the data order increases, both indices grow by a certain amount, indicating that processing a massive amount of data requires longer computing time. This behavior is consistent with the characteristics of the algorithm.

3.3.3. Backtrack Parameter

Backtracking is an important step in the operation of the algorithm and directly determines its evolution effectiveness. In this example, two backtrack conditions are used: once either condition is satisfied during the calculation, a degeneration is assumed to have occurred and the backtrack operation is performed. In the NSGA, the backtrack confidence level determines the intensity of its execution. A high confidence level produces faster evolution of the algorithm; however, it also places greater computational pressure on each layer, which decreases the computation efficiency of a single layer. Conversely, a lower confidence level keeps the algorithm operations stable but increases the corresponding number of layers. The function of the confidence level is therefore to ensure some evolution in each layer of the calculation, with the evolution ratio depending on the size of the confidence level proposed in this paper. Setting an appropriate backtrack confidence level guarantees the accuracy and validity of the final result.

In this paper, four values are tested: 0.99, 1.00, 1.01 and 1.02. The value 0.99 ensures that the results do not degenerate by more than 1%, with the evolution speed determined mainly by the randomness of the algorithm. The value 1.00 ensures that the performance values evolve continually, with the evolution speed likewise determined by the randomness.
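The backtrack test itself can be read as a one-line comparison between the average fitness of consecutive layers, scaled by the confidence level. The function below is an illustrative reading of that rule for a maximization problem, not the paper's exact implementation:

```python
def should_backtrack(prev_avg_fitness, curr_avg_fitness, confidence=1.00):
    """Return True when the current layer's population average fitness
    fails to reach `confidence` times the previous layer's average.

    confidence = 0.99 tolerates up to a 1% degeneration per layer;
    confidence = 1.00 demands no degeneration; 1.01 / 1.02 demand growth
    of at least 1% / 2% per layer.  Illustrative sketch, assuming a
    maximization problem.
    """
    return curr_avg_fitness < confidence * prev_avg_fitness
```

Seen this way, the trade-off discussed above is direct: raising `confidence` rejects more layers (forcing each layer to work harder), while lowering it accepts more layers but lets more of them pass without real progress.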
Unlike the 0.99 case, the computation results under 1.00 increase layer by layer, whereas 0.99 can yield the same performance values in different layers. Under 1.01, the performance value must increase by more than 1% per layer, and under 1.02 by more than 2%. Comparing the results under the four values, the computations are smooth and the results stable for 0.99 and 1.00. With 1.01, the algorithm stagnates within a single layer, indicating that a 1% increase per layer is difficult to achieve, and the operation time is much longer. No results can be obtained with 1.02. Hence, a backtrack confidence value of 1.00 is recommended for the example in this paper, and it can be extended to most problems.

3.4. Parameter Sensitivity Analysis

3.4.1. Preset Parameter

Processing the results of the parameter calculations, the changes of the maximum non-diagonal element value Max(Ti,j) as the parameters vary are shown in Figs 17–20. When one parameter is varied, the others are set to a crossover rate of 0.7, a mutation rate of 0.02, a genetic algebra of 4000 in the NSGA and 5000 in the GA, and a population size of 40; 30 orders of modal data are used in these cases. Table 8 then lists the numbers of layers and backtrack times under parameter values for which the GA operation inside the NSGA converges and converges prematurely. In each case, when one parameter is sampled, the other parameters are set so that the GA operation converges.
Table 8. Number of layers and backtrack times under converged and prematurely converged parameters.

Converged parameters:

                     Crossover rate     Mutation rate      Genetic algebra    Population size
                     0.6      0.7       0.02     0.03      4000     5000      40       50
    Layer            5        7         7        10        7        9         7        10
    Backtrack times  0        0         0        1         0        19        0        16
    Max(Ti,j)        0.2864   0.2842    0.2842   0.2834    0.2842   0.2858    0.2842   0.2866

Premature converged parameters:

                     Crossover rate     Mutation rate      Genetic algebra    Population size
                     0.3      0.4       0.005    0.5       1000     2000      20       30
    Layer            7        6         9        7         58       26        150      43
    Backtrack times  20       10        27       22        3173     1422      3579     256
    Max(Ti,j)        0.2844   0.2867    0.2871   0.2834    0.2846   0.2871    0.2880   0.2883

Figure 17. Changes of Max(Ti,j) with varying crossover rate (mutation rate 0.02).

Figure 18. Changes of Max(Ti,j) with varying mutation rate (crossover rate 0.7).

Figure 19. Changes of Max(Ti,j) with varying genetic algebra.

Figure 20. Changes of Max(Ti,j) with varying population size.
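The quantity Max(Ti,j) tracked in Table 8 and Figs 17–20 is the largest off-diagonal element of the MAC matrix computed from the mode shapes at the selected sensor DOFs. A minimal pure-Python sketch of that criterion (assuming real-valued mode shapes and the standard MAC definition, which the paper's TMAC is taken to follow):

```python
def _mac(u, v):
    """MAC between two mode-shape vectors sampled at the measured DOFs.

    Standard definition for real-valued modes:
    MAC(u, v) = (u . v)^2 / ((u . u)(v . v)), in [0, 1].
    """
    num = sum(a * b for a, b in zip(u, v)) ** 2
    den = sum(a * a for a in u) * sum(b * b for b in v)
    return num / den

def max_offdiag_mac(modes):
    """Largest off-diagonal MAC element over all mode pairs.

    `modes` is a list of mode-shape vectors restricted to the candidate
    sensor DOFs; smaller values mean better-distinguished modes, so the
    OSP objective is to minimize this quantity.
    """
    return max(_mac(modes[i], modes[j])
               for i in range(len(modes)) for j in range(i + 1, len(modes)))
```

For example, two orthogonal mode vectors give a MAC of 0 and two parallel ones give 1, which is why values around 0.28 in Table 8 indicate reasonably well-separated modes at the chosen sensor set.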
The following phenomena can be observed directly in the results for the four parameters: crossover rate, mutation rate, genetic algebra and population size. The preset parameters have a strong impact on the GA, which cannot obtain an accurate result, especially when the settings do not allow the algorithm to reach convergence. In contrast, changing the preset parameters has almost no effect on the NSGA. Its operation time increases when the parameters do not meet the standard of convergence, but the results are unaffected, because the NSGA uses a partitioned method to solve the discrete variables and the performance values do not fall to local optima even under premature convergence. Therefore, the NSGA has low sensitivity to the preset parameters, and its anti-interference ability is better than that of the traditional GA. Note, however, that excessively small settings of the genetic algebra (1000) and population size (20) trigger a large number of backtrack operations, and thousands of backtracks in a single run greatly reduce efficiency. Although the results are not affected, extremely small parameter values should be avoided to keep the operation speed reasonable.

3.4.2. Data Volume Parameter

A high modal order, which entails a great number of calculations, produces a large number of TMAC non-diagonal element values and makes the solution path of the algorithm more cumbersome. The traditional GA therefore suffers a sharp decline in performance value as the data size grows. In this situation, the NSGA also shows a certain decrease in performance value, primarily because of the change in data volume. The maximum non-diagonal elements of TMAC, which measure the orthogonality of the mode shape vectors, are shown in Table 9.
Table 9. Maximum non-diagonal elements of TMAC as the modal order changes (crossover rate 0.7, mutation rate 0.02, population size 40, genetic algebra 5000).

    Modal order   10       15       20       25       30
    GA            0.1048   0.1125   0.2037   0.2079   0.3221
    NSGA          0.1048   0.1125   0.1639   0.2285   0.2811

The growing values for both algorithms also show that increasing the data makes the judgment of orthogonality more rigorous, an effect that is independent of the algorithm mechanism. With high-order modal data, non-diagonal element values that take more of the bridge's dynamic information into account are obtained to evaluate the orthogonality between two nodes.

3.4.3. Backtrack Confidence Level

The backtrack condition set in this example simply applies a confidence level to the population average fitness. This approach ensures the evolution of the entire population and controls the evolution rate of the performance values.

4. CONCLUSIONS

Based on the principles of the GA and the NPA, a new algorithm for massive data processing was proposed in this paper. From the structure and principles of the new algorithm and the sensitivity analysis of its parameters, the following conclusions can be drawn: The NSGA combines the excellent searching ability of the GA with the fast convergence of the NPA; it can quickly and effectively solve problems with a large number of discrete points while ensuring the accuracy of the results.
The traditional GA cannot reach the solution accurately in a high-index solution domain, and it is sensitive to disturbances because of local optima. The NSGA reduces the interference caused by local optima by decreasing the index of the solution domain and working with partial solutions, which avoids large-scale errors and improves the accuracy of the results. Its optimized results and efficiency are better than those of the traditional algorithm. The NSGA shows low sensitivity to changes in the preset parameters of the genetic operation; only extremely low values affect the running speed, and the results remain excellent and stable. For the data volume parameter, the dimension of the performance value increases as the data volume grows. Operational comparison shows that the NSGA has a stronger ability to process massive amounts of data and to identify performance values with a highly concentrated discrete degree. The running efficiency depends largely on how many backtrack operations are executed; the backtrack operation therefore directly affects the running speed, which shows that the algorithm is sensitive to the backtrack confidence level. One weakness of the proposed algorithm lies in its operation process: approximately half of the variables are determined in the first iteration, where the backtrack operation is not triggered, which is disadvantageous to the final solution and detrimental to further computation. If loop iteration could be incorporated into the operation process, a more accurate global optimum would be obtained. The NSGA can be used not only for sensor optimization but also for most massive data processing tasks, so it has great application potential.
ACKNOWLEDGEMENTS

The authors are grateful to Yingqi Liu, Yasi Deng and Weidong Jiang at Wuhan University of Technology, Wuhan, Hubei Province, China, for their support of this research. The paper was supported by the Inner Mongolia Traffic Science and Technology Projects and the Fundamental Research Funds for the Central Universities (2017-YB-014).

REFERENCES

1 Yang, Y. and Nagarajaiah, S. (2014) Blind denoising of structural vibration responses with outliers via principal component pursuit. Struct. Control Health Monit., 21, 962–978.
2 Kammer, D.C. (1990) Sensor placement for on-orbit modal identification and correlation of large space structures. J. Guid. Control Dyn., 14, 251–259.
3 Cheng, J.Q. (2012) Optimal sensor placement for bridge structure based on improved effective independence. J. Vib. Meas. Diagn., 32, 812–816. (in Chinese).
4 Zhou, G.D. (2015) A comparative study of genetic and firefly algorithms for sensor placement in structural health monitoring. Shock Vib., 2015, 1–10.
5 Papadimitriou, C. (2004) Optimal sensor placement methodology for parametric identification of structural systems. J. Sound Vib., 278, 923–947.
6 Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, New Jersey.
7 Holland, J.H. (1975) Adaptation in Natural and Artificial Systems. MIT Press, Cambridge.
8 Zhou, G.D. (2013) The node arrangement methodology of wireless sensor networks for long-span bridge health monitoring. Int. J. Distrib. Sens. Netw., 2013, 1172–1175.
9 Monti, G. (2010) Genetic-algorithm-based strategies for dynamic identification of nonlinear systems with noise-corrupted response. J. Comput. Civ. Eng., 24, 173–187.
10 Koh, C.G. (2006) Genetic algorithms in structural identification and damage detection. Intell. Comput. Paradig. Earthq. Eng., 2006, 316–341.
11 Chisari, C. (2015) Dynamic and static identification of base-isolated bridges using genetic algorithms. Eng. Struct., 102, 80–92.
12 Onnen, C. (1997) Genetic algorithms for optimization in predictive control. Control Eng. Pract., 5, 1363–1372.
13 Gao, W.C. (2008) Optimization of sensor placement by genetic algorithms. J. Harbin Inst. Technol., 40, 9–11. (in Chinese).
14 Zhou, L. (2014) Optimized arrangement of sensors for continuous rigid frame bridge. Highway, 11, 36–40. (in Chinese).
15 Cha, Y.J. (2013) Optimal placement of active control devices and sensors in frame structures using multi-objective genetic algorithms. Struct. Control Health Monit., 20, 16–44.
16 Shabbir, F. (2016) Model updating using genetic algorithms with sequential niche technique. Eng. Struct., 120, 166–182.
17 Moisés, S. (2016) A novel unsupervised approach based on a genetic algorithm for structural damage detection in bridges. Eng. Appl. Artif. Intell., 52, 168–180.
18 Kennedy, J. and Eberhart, R. (1995) Particle swarm optimization. Proc. IEEE Int. Conf. Neural Networks, Perth, Western Australia, November 27–December 1, pp. 1942–1948. IEEE Service Center, Perth.
19 Li, J.L. (2015) Optimal sensor placement for long-span cable-stayed bridge using a novel particle swarm optimization algorithm. J. Civil Struct. Health Monit., 5, 677–685.
20 Yi, T.H., Li, H.N. and Zhang, X.D. (2012) A modified monkey algorithm for optimal sensor placement in structural health monitoring. Smart Mater. Struct., 21, 52–53.
21 Yi, T.H., Li, H.N. and Zhang, X.D. (2012) Sensor placement on Canton Tower for health monitoring using asynchronous-climb monkey algorithm. Smart Mater. Struct., 21, 615–631.
22 Aggelogiannaki, E. and Sarimveis, H. (2007) A simulated annealing algorithm for prioritized multiobjective optimization: implementation in an adaptive model predictive control configuration. IEEE Trans. Syst. Man Cybern. B Cybern., 37, 902–915.
23 Yang, X.S. (2009) Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications, Lecture Notes in Computer Science, vol. 5792, Sapporo, Japan, pp. 169–178. Springer, Berlin.
24 Zhou, G.D. (2015) Energy-aware wireless sensor placement in structural health monitoring using hybrid discrete firefly algorithm. Struct. Control Health Monit., 22, 648–666.
25 Zhou, G.D. (2015) Optimal sensor placement under uncertainties using a nondirective movement glowworm swarm optimization algorithm. Smart Struct. Syst., 16, 243–262.
26 Meng, Q.C. (2007) Research of optimal placement of sensors for health monitoring system of the 3rd Nanjing Changjiang River Bridge. Bridge Constr., 5, 76–79. (in Chinese).
27 Huang, M.S. (2007) Serial method for optimal placement of sensors and its application in modal tests of bridge structure. Bridge Constr., 18, 80–83. (in Chinese).
28 Shi, L. (2000) Nested partitions method for global optimization. Oper. Res., 48, 390–407.
29 Shi, L. (2009) Nested Partitions Method, Theory and Applications. Springer, New York.
30 Chauhdry, M.H.M. (2012) Nested partitions for global optimization in nonlinear model predictive control. Control Eng. Pract., 20, 869–881.
31 Zong, D.C. (2015) Combined nested partitions method based on local search algorithm. Appl. Res. Comput., 32, 752–758. (in Chinese).
32 Zhang, B.Y. (2016) Sensor location optimization of large span bridge based on nested-stacking genetic algorithm. J. Wuhan Univ. Technol., 40, 745–749. (in Chinese).
33 Carne, T.G. and Dohrmann, C.R. (1994) A modal test design strategy for modal correlation. Proc. 13th Int. Conf. Modal Analysis, Nashville, TN, December, pp. 927–933. Proceedings of SPIE, Washington DC.
34 Yi, T.H. (2014) Hierarchic wolf algorithm for optimal triaxial sensor placement. J. Build. Struct., 35, 223–229. (in Chinese).

© The British Computer Society 2017. All rights reserved.

The Computer Journal, Oxford University Press. Published: Sep 1, 2018.

