Enrichment of a rational polynomial family of shape functions with regularity C_0^k, k = 0, 2, 4, … Applications in axisymmetric plates and shells
Oscar A.G. de Suarez; Rodrigo Rossi; Cláudio R.A. da Silva Jr
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257209
Purpose – The purpose of this paper is to investigate the approximation performance of a family of piecewise rational polynomial shape functions, which are enriched by a set of monomials of order p to obtain high order approximations. To numerically demonstrate the features of the enriched approximation, some examples on the mechanical elastic response and free‐vibration of axisymmetric plates and shells are carried out. Design/methodology/approach – The global approximation is based on a particular family of weight functions, defined on the parametric domain of the element, ξ∈(−1,1), resulting in shape functions with compact support and regularity C_0^k, k = 0, 2, 4, … in the global domain Σ. The PU shape functions are enriched by a set of monomials of order p to obtain high order approximation spaces. Findings – Based on the numerical results for elastic axisymmetric plates and shells, it is demonstrated that the proposed methodology keeps the ill‐conditioning of the system of equations within acceptable levels. Comparisons established between linear and Hermitian shape functions show similar results. The observed results for the free‐vibration problem of plates and shells show the potential of the proposed approximation space. Research limitations/implications – In this paper the formulation is limited to the modeling of axisymmetric plate and shell problems. However, it can be applied to model other problems where high regularity of the approximation is required. Originality/value – The paper presents an alternative approach to constructing partition of unity shape functions based on a particular family of weight functions.
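The construction the abstract describes — compactly supported weight functions generating partition of unity (PU) shape functions — can be illustrated with a minimal 1D Shepard-type sketch. This is not the authors' specific rational polynomial family; the nodes, support radius, and weight exponent below are illustrative assumptions. The exponent k of the weight controls how smoothly each function vanishes at its support boundary, which is what governs the global regularity of the PU.

```python
import numpy as np

def weight(s, k=2):
    # Compactly supported polynomial weight on s in (-1, 1);
    # it vanishes with order 2k at the support ends, which
    # controls the global regularity of the resulting PU.
    return np.where(np.abs(s) < 1.0, (1.0 - s**2)**k, 0.0)

def shepard_shape_functions(x, nodes, h, k=2):
    # Standard Shepard partition of unity: N_i = W_i / sum_j W_j.
    # The functions sum to 1 wherever at least one weight is nonzero.
    W = np.array([weight((x - xi) / h, k) for xi in nodes])
    return W / W.sum(axis=0)

nodes = np.linspace(0.0, 1.0, 5)   # illustrative node cloud
h = 0.5                            # support radius chosen so supports overlap
x = np.linspace(0.0, 1.0, 101)
N = shepard_shape_functions(x, nodes, h)
print(np.allclose(N.sum(axis=0), 1.0))  # partition of unity holds
```

Enrichment in the sense of the paper would then multiply each N_i by a local monomial basis {1, ξ, ξ², …, ξ^p} to span higher-order approximation spaces.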
Investigating a flexible wind turbine using consistent time‐stepping schemes
Denis Anders; Stefan Uhlar; Melanie Krüger; Michael Groß; Kerstin Weinberg
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257218
Purpose – Wind turbines are of growing importance for the production of renewable energy. The kinetic energy of the blowing air induces a rotary motion and is thus converted into electricity. From the mechanical point of view the complex dynamics of wind turbines become a matter of interest for structural optimization and optimal control in order to improve stability and energy efficiency. The purpose of this paper therefore is to present a mechanical model of a three‐blade wind turbine with a momentum and energy conserving time integration of the system. Design/methodology/approach – The authors present a mechanical model based upon a rotationless formulation of rigid body dynamics coupled with flexible components. The resulting set of differential‐algebraic equations is solved using energy‐consistent time‐stepping schemes. Rigid and orthotropic‐elastic body models of a wind turbine show the robustness and accuracy of these schemes for the relevant problem. Findings – Numerical studies prove that physically consistent time‐stepping schemes provide reliable results, especially for hybrid wind turbine models. Originality/value – The application of energy‐consistent methods for time discretization is intended to provide computational robustness and to reduce the computational cost of simulating the dynamical wind turbine system. The model is intended to give first access to the investigation of fluid‐structure interaction for wind turbines.
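The appeal of energy-consistent integrators can be seen in a much simpler setting than the paper's differential-algebraic turbine model. The sketch below (an assumption for illustration, not the authors' scheme) uses the implicit midpoint rule on a linear oscillator; for linear systems this rule conserves the quadratic energy exactly, up to roundoff, over arbitrarily long runs, which is the kind of robustness the abstract refers to.

```python
import numpy as np

# Implicit midpoint rule for q'' = -omega^2 * q (unit mass):
#   q_{n+1} = q_n + dt * (p_n + p_{n+1}) / 2
#   p_{n+1} = p_n - dt * omega^2 * (q_n + q_{n+1}) / 2
# For this linear system the update is solved in closed form as a 2x2 system.
omega, dt, steps = 2.0, 0.05, 2000
q, p = 1.0, 0.0  # initial position and momentum

def energy(q, p):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2

E0 = energy(q, p)
for _ in range(steps):
    A = np.array([[1.0, -dt / 2.0],
                  [dt * omega**2 / 2.0, 1.0]])
    b = np.array([q + dt / 2.0 * p,
                  p - dt * omega**2 / 2.0 * q])
    q, p = np.linalg.solve(A, b)

# Energy is preserved to machine precision after 2000 steps.
print(abs(energy(q, p) - E0) < 1e-10)
```

An explicit scheme such as forward Euler would show a systematic energy drift over the same interval, which is why consistency of this kind matters for long-time structural simulations.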
From determinism to probabilistic analysis and stochastic simulation
I. Doltsinis
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257227
Purpose – The purpose of this paper is to expose computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent parameters. Design/methodology/approach – In total, two approaches are distinguished that rely on solvers from deterministic algorithms. Probabilistic analysis is referred to as the approximation of the response by a Taylor series expansion about the mean input. Alternatively, stochastic simulation implies random sampling of the input and statistical evaluation of the output. Findings – Beyond the characterization of random response, methods of reliability assessment are discussed. Concepts of design improvement are presented. Optimization for robustness diminishes the sensitivity of the system to fluctuating parameters. Practical implications – Deterministic algorithms available for the primary problem are utilized for stochastic analysis by statistical Monte Carlo sampling. The computational effort for the repeated solution of the primary problem depends on the variability of the system and is usually high. Alternatively, the analytic Taylor series expansion requires extension of the primary solver to the computation of derivatives of the response with respect to the random input. The method is restricted to the computation of output mean values and variances/covariances, with the effort determined by the amount of the random input. The results of the two methods are comparable within the domain of applicability. Originality/value – The present account addresses the main issues related to the presence of randomness in engineering systems and processes. They comprise the analysis of stochastic systems, reliability, design improvement, optimization and robustness against randomness of the data. The analytical Taylor approach is contrasted to the statistical Monte Carlo sampling throughout. 
In both cases, algorithms known from the primary, deterministic problem are the starting point of stochastic treatment. The reader benefits from the comprehensive presentation of the matter in a concise manner.
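The two approaches the abstract contrasts can be sketched for a single random input. The response function below is a hypothetical stand-in for a deterministic solver; the Taylor route expands about the mean input and needs the derivative of the response, while Monte Carlo repeatedly evaluates the deterministic solver on sampled inputs.

```python
import numpy as np

# Hypothetical scalar response y = f(x) with random input x ~ N(mu, sigma).
f  = lambda x: np.exp(0.1 * x) * np.sin(x)
df = lambda x: 0.1 * np.exp(0.1 * x) * np.sin(x) + np.exp(0.1 * x) * np.cos(x)
mu, sigma = 1.0, 0.05

# Analytic route: first-order Taylor expansion about the mean input gives
# mean(y) ~ f(mu) and var(y) ~ (df/dx(mu))^2 * sigma^2.
mean_taylor = f(mu)
var_taylor = df(mu)**2 * sigma**2

# Statistical route: Monte Carlo sampling of the input, repeated
# "deterministic solves", then statistical evaluation of the output.
rng = np.random.default_rng(0)
y = f(rng.normal(mu, sigma, 100_000))

# Within the domain of applicability (small input variability) the
# two methods agree, as the abstract states.
print(abs(y.mean() - mean_taylor) < 5e-3, abs(y.var() - var_taylor) < 1e-3)
```

For larger input variability the truncated Taylor expansion degrades while Monte Carlo remains valid at the cost of many repeated solves, which mirrors the trade-off discussed above.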
A numerical study on effect of temperature and inertia on tolerance design of mechanical assembly
G. Jayaprakash; K. Sivakumar; M. Thilak
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257236
Purpose – Due to technological and financial limitations, nominal dimensions may not be achievable during the manufacturing process. Therefore, tolerance allocation is of significant importance for assembly. Conventional tolerance analysis methods are limited by the assumption of part rigidity. Every mechanical assembly contains one or more flexible parts which undergo significant deformation due to gravity, temperature change, etc. This deformation has to be considered during tolerance design of the mechanical assembly, in order to ensure that the product can function as intended under a wide range of operating conditions for the duration of its life. The purpose of this paper is to determine the deformation of components under inertia and temperature effects. Design/methodology/approach – In this paper, finite element analysis of the assembly is carried out to determine the deformation of the components under inertia and temperature effects. The deformations are then incorporated in the assembly functions generated from vector loop models. Finally, the tolerance design problem is optimized with an evolutionary technique. Findings – With the presented approach, the component tolerance values found are the most robust in withstanding temperature variation during the product's application. As a result, the tolerance requirements of the given assembly are relaxed to a certain extent for critical components, resulting in reduced manufacturing cost and high product reliability. These benefits make it possible to create a high‐quality and cost‐effective tolerance design, commencing at the earliest stages of product development. Originality/value – With the approach presented in the paper, the component tolerance values found were the most robust in withstanding temperature variation during the product's application.
Due to this, the tolerance requirements of the given assembly are relaxed to a certain extent for critical components, resulting in reduced manufacturing cost and high product reliability. These benefits make it possible to create a high‐quality and cost‐effective tolerance design, commencing at the earliest stages of product development.
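A minimal sketch of the idea of folding thermal deformation into a tolerance stack-up is shown below. It is not the paper's finite element or vector loop model: the three-part gap assembly, material coefficients, temperature rise, and tolerance values are all illustrative assumptions, and the thermal deformation is taken as simple linear expansion rather than an FEA result.

```python
import numpy as np

# Illustrative one-dimensional stack: a gap g = d3 - d1 - d2 must stay
# positive. Thermal expansion delta_i = alpha_i * L_i * dT shifts the
# nominal dimensions before the allocated tolerances t_i are stacked
# worst-case, so the check is done at operating temperature, not at
# the nominal (cold) condition.
alpha = np.array([12e-6, 12e-6, 23e-6])  # 1/K, assumed materials (steel, steel, aluminium)
L = np.array([40.0, 25.0, 66.0])         # mm, nominal dimensions
dT = 60.0                                 # K, assumed temperature rise in service
t = np.array([0.05, 0.05, 0.08])          # mm, allocated tolerances

d = L + alpha * L * dT                    # thermally deformed dimensions
gap_nominal = d[2] - d[0] - d[1]          # assembly function from the (trivial) vector loop
gap_worst = gap_nominal - t.sum()         # worst-case stack-up
print(gap_worst > 0)                      # assembly still functions at temperature
```

In the paper's setting, an evolutionary optimizer would search over the tolerance vector t to minimize manufacturing cost subject to this kind of functionality constraint holding across the operating temperature range.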
A novel approximation method for multivariate data partitioning: Fluctuation free integration based HDMR
Burcu Tunga; Metin Demiralp
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257245
Purpose – The plain High Dimensional Model Representation (HDMR) method needs Dirac delta type weights to partition the given multivariate data set for modelling an interpolation problem. A Dirac delta type weight imposes a different importance level on each node of this set during the partitioning procedure, which directly affects the performance of HDMR. The purpose of this paper is to develop a new method, combining fluctuation free integration and HDMR, to obtain optimized weight factors that identify these importance levels for the multivariate data partitioning and modelling procedure. Design/methodology/approach – A common task in multivariate interpolation problems, where the sought function values are given at the nodes of a rectangular prismatic grid, is to determine an analytical structure for the function under consideration. As the multivariance of an interpolation problem increases, standard numerical methods break down and computer‐based applications run into memory limitations. To overcome these multivariance problems, it is better to deal with less‐variate structures. HDMR methods, which are based on a divide‐and‐conquer philosophy, can be used for this purpose. This corresponds to multivariate data partitioning in which at most univariate components of the plain HDMR are taken into consideration. Obtaining these components requires a number of integrals to be evaluated, and the Fluctuation Free Integration method is used to compute them. This new form of HDMR integrated with Fluctuation Free Integration also allows the Dirac delta type weight in multivariate data partitioning to be discarded and the weight factors corresponding to the importance level of each node of the given set to be optimized. Findings – The method developed in this study is applied to six numerical examples with different structures, and very encouraging results were obtained.
In addition, the new method is compared with methods that use a Dirac delta type weight function, and the results are given in the numerical implementations section. Originality/value – The authors' new method allows an optimized weight structure to be determined for the given modelling problem, instead of imposing a certain weight function such as a Dirac delta type weight. This gives the HDMR philosophy the flexibility of weight utilization in multivariate data modelling problems.
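The "at most univariate" truncation of plain HDMR that the abstract mentions can be sketched directly on gridded data. The sketch below uses plain uniform averaging over the grid (not the paper's fluctuation free integration or optimized weights, and not a Dirac delta weight): the constant term is the grid mean and each univariate component is a marginal mean minus that constant. For a purely additive function this truncation is exact.

```python
import numpy as np

# Bivariate data on a rectangular grid, deliberately additive:
# F(x1, x2) = sin(pi*x1) + x2^2.
x1 = np.linspace(0.0, 1.0, 21)
x2 = np.linspace(0.0, 1.0, 21)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
F = np.sin(np.pi * X1) + X2**2

# Plain HDMR truncated at univariate terms, with uniform weights:
f0 = F.mean()                  # constant component
f1 = F.mean(axis=1) - f0       # univariate component in x1 (marginal mean)
f2 = F.mean(axis=0) - f0       # univariate component in x2
F_hdmr = f0 + f1[:, None] + f2[None, :]

# The additive test function is reproduced exactly by the truncation.
print(np.allclose(F, F_hdmr))
```

For non-additive data the residual F − F_hdmr measures the weight of the neglected bivariate (and higher) components; the paper's contribution is to optimize the weights in these averages instead of fixing them a priori.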
A dynamic mixed nonlinear subgrid‐scale model for large‐eddy simulation
Yang Zhengjun; Wang Fujun
2012 Engineering Computations: International Journal for Computer-Aided Engineering and Software
doi: 10.1108/02644401211257263
Purpose – Large eddy simulation (LES) is widely used in the prediction of turbulent flow. The purpose of this paper is to propose a new dynamic mixed nonlinear subgrid‐scale (SGS) model (DMNM), in order to improve LES precision for complex turbulent flows, such as flows involving separation or rotation. Design/methodology/approach – The SGS stress in DMNM consists of a scale‐similarity part and an eddy‐viscosity part. The scale‐similarity part explicitly describes the energy transfer of scales close to the cut‐off. The eddy‐viscosity part represents the energy transfer between scales smaller and larger than the grid‐filter size. The model is demonstrated through two examples: channel flow and flow over a surface‐mounted cube. The computed results are compared with prior experimental data, and the behavior of DMNM is analyzed. Findings – The proposed model has the following characteristics. First, DMNM exhibits significant flexibility in the self‐calibration of its model coefficients. Second, it does not require alignment of the principal axes of the SGS stress tensor and the resolved strain rate tensor. Third, since both the rotating part and the scale‐similarity part are considered in the new model, flows with rotation and separation are easily simulated. Compared with the prior experimental data, DMNM gives more accurate results in both examples. Originality/value – The SGS model DMNM proposed in the paper captures detailed vortex characteristics more accurately and has advantages in the simulation of complex flows involving separation.