Byeong-Wook Nam

Abstract

The importance of operations and maintenance (O&M) and piping inspection in the shipbuilding and offshore plant industries has increased significantly in recent years. This study therefore proposes a system that uses augmented reality (AR) to support these operations. AR-based O&M and inspection systems can improve work comprehension and efficiency by describing specific work functions with 3D graphics instead of drawings. To realize this improvement, the augmented model must correspond to reality; if accurate registration is not achieved, the work itself can be disrupted. Marker-based AR is therefore commonly used, generating specific recognition objects to correct the location of the augmented model. However, owing to the characteristics of the shipbuilding and offshore plant industries, markers are likely to be damaged, causing cameras to fail to detect them clearly. In this study, a 3D camera was used to generate a point cloud from 3D image information, and the area around the model to be detected was designated as the region of interest (ROI). Furthermore, because the system targets portable devices that can be used in the working environment, a high-end device environment cannot be assumed; superfluous data were therefore removed by detecting 3D edges in the ROI, minimizing the data to be processed. The scalability of this work was enhanced by extracting computer-aided design (CAD) file information from the CAD tool used in the shipbuilding industry and converting it into a point cloud. The proposed system is expected to resolve difficulties in understanding O&M work by addressing registration errors that can occur in constrained environments and to reduce the torsional distortion of augmented models.
Keywords: virtual and augmented reality, inspection and tolerance control

Highlights

- Object extraction from scanned point-cloud data by region of interest and edge detection.
- Creation of design point-cloud data based on drawing data.
- Evaluation of registration performance through registration of two point-cloud models using the iterative closest point algorithm.

1. Introduction

The lifespans of ships and offshore plants have increased in recent times, owing to the trend toward complex and large-scale structures, and the costs associated with operations and maintenance (O&M) are considerably higher than those of construction (Jang, Mun, Sohn, Suh, & Han, 2011). Accordingly, considering the importance of O&M systems, significant related technological developments have been made in recent years. Over 75% of the explosions that have occurred in offshore plants and ship operations at sea were due to the failure of internal members (Yang, Kwon, & Keum, 2004). Typical examples include the Piper Alpha accident and the Deepwater Horizon accident in the Gulf of Mexico, shown in Fig. 1.

Figure 1: Deepwater Horizon (left) and Piper Alpha (right) accidents.

The fire on Deepwater Horizon was caused by hydrocarbon leaks, resulting in considerable environmental damage, including massive oil spills (35 000 to 60 000 barrels per day) and casualties (35 people). The Piper Alpha disaster, the worst recorded accident in the history of marine oil fields, was caused by pipe gas leaks that resulted in the death of 167 of the 229 individuals on the platform. These leaks have been attributed to gas-pump defects arising from a lack of maintenance and repair. To support maintenance and minimize such accidents, various systems are continuously being developed.
Among them, augmented reality (AR)-based maintenance systems have become increasingly popular because they compare real models with virtual models and present the necessary information by visualizing virtual models in a real-world context. Although AR-based support systems have recently been actively studied in various fields such as training, design, and maintenance, numerous issues arise when they are used in complex environments such as the shipbuilding and offshore industries. One representative issue is registration. Therefore, in this study, a fast and accurate registration methodology for complex environments was investigated, focusing on the piping model, one of the most important members of ship and offshore plant structures.

The remainder of this paper is structured as follows. Section 2 introduces AR-based systems used in the industry. Section 3 describes the overall composition of the proposed system and the methodology used. In Section 4, the application of the system to a manufactured test model and the quantitative evaluation of the results are discussed.

2. Related Research

AR can visualize virtual information in real-world contexts and interact with the real world. This technology can provide work-related information to workers in the form of images instead of drawings, improving the work environment and efficiency. However, the augmented information often cannot be accurately registered with the real model owing to industrial environmental factors, resulting in ambiguity in the work functions. To resolve this issue, AR-based registration technology has recently been studied using various techniques, including marker detection, marker registration, tracking, and 3D object rendering (Kim & Lee, 2014). These technologies can be classified as marker-based or markerless tracking technologies.
2.1 Marker AR

Marker-based AR can augment a virtual model easily and quickly by creating a special pattern image, attaching it to the actual space, recognizing the image pattern, and finally rendering the event defined by the user. In addition, the recognized marker is used as a reference coordinate point, thereby simplifying the registration of the augmented model. Lee and Omer (2011) studied marker-based O&M support systems. Kang and Han (2014) studied how a marker-based AR mouse can be used as a controller to mitigate the screen obstruction that occurs when controlling portable equipment. Tomohiro, Kazuki, Nobuyoshi, and Ali (2019) studied a system that intuitively provides indoor heat-flow and analysis information, analyzed through computational fluid dynamics, using marker-based AR technology. Martin (2008) used the random sample consensus (RANSAC) algorithm to detect lines and studied marker-detection methods through line-based edge detectors for faster and more accurate marker detection. To improve the performance of markers that are sensitive to illumination and brightness, Lee, Lee, and Kim (2012) studied a marker-recognition method based on electromagnetic induction. To achieve stable tracking performance under marker misrecognition, occlusion, and blur, Jung and Park (2019) proposed a hybrid tracking method using an RGB camera along with an infrared camera. Index AR Solutions of the United States developed a marker-based system that helps workers simplify their jobs by enabling them to view installations in the industrial field as an augmented model before the actual installation. However, marker technology is based on a 2D image-tracking method that is sensitive to real-time changes in the surrounding environment; it is therefore important to check for foreign substances on, or damage to, the marker.
Clearly, this technique has limitations when applied to actual industrial sites. To overcome this issue, numerous studies have been conducted on markerless AR technology, which processes image data entered in real time without creating fixed-shape recognition objects.

2.2 Markerless AR

Markerless AR is a method used to visualize events based on the results derived from analyzing data normalized from real-time image signals, without using markers. Detection and tracking technologies that detect specific patterns in the images are combined and implemented. Therefore, this method is more complex than existing marker-based methods, and its methodology is also different. Here, we review research on markerless methods in four groups: the traditional method, which enhances the reliability of registration by improving image-processing-based tracking algorithms; the spatial mapping method, which creates virtual maps of real environments and maps corresponding virtual models onto them; the reverse engineering method, which reconstructs virtual computer-aided design (CAD) shapes and maps them; and the machine-learning method, which compares the differences in the obtained results.

2.2.1 Traditional method

Traditional methods employ techniques to determine the location and posture of the object to be detected. Bok, Hwang, and Kweon (2007) estimated the position and posture of an object using speeded-up robust features (SURF) and P3P to achieve complete registration for an unmanned vehicle. Daniel, Dieter, and Horst (2009) recognized objects through the scale-invariant feature transform algorithm and demonstrated a system that tracked objects in a mobile environment through normalized cross-correlation (NCC). Kwon and Chae (2015) studied a more stable tracking method that increases the registration probability through weighted NCC, with weights assigned by distance during NCC registration.
However, these techniques are ineffective in dark environments and are thus unsuitable for the typically dim shipbuilding work environment. Therefore, methods using additional sensors alongside cameras are being studied. Lee, Lee, and Kim (2013) reported a module that detects surrounding objects by linking information obtained from image processing with the ship's automatic identification system, providing the navigator with AR-based navigation information about ships located at long distances. However, because the maintenance work considered in this study is not performed on a model located far from the worker, applying such a distance-based precision algorithm was neither appropriate nor necessary. Lee and Ko (2018) studied an initial registration-position adjustment method to improve registration speed and accuracy using global positioning system (GPS) and inertial measurement unit sensors. While GPS is highly effective for object-location tracking, it is not suitable for indoor work owing to its low indoor accuracy. Kim et al. (2015) proposed a method to estimate the position and posture of an existing ship block in 2D images and augment the corresponding 3D CAD model. Nam, Lee, Lee, Lee, and Hwang (2019) studied a registration method for a markerless AR management system using a point cloud of ship blocks. Sun, Hiekata, Yamato, Nakagaki, and Sugawara (2014) studied a point cloud-based registration method for evaluating curved shell plates. However, the models in these previous studies differ significantly in shape from the piping model that is the object of this study, making it difficult to apply these methodologies to our system. In this study, because a 3D camera was used to acquire clear 3D object-location information, the calculation errors that often occur when extracting 3D information from 2D images were minimized.
2.2.2 Spatial mapping method

Spatial mapping is a technology that reverse transforms and maps virtual models by scanning real environments with cameras, similar to the simultaneous localization and mapping (SLAM) technology frequently used in robot vacuum cleaners. Simon and Berger (2002) studied the stereo-based reconstruction of real space into polygons and the registration of multiple models. Georg and David (2007) studied realistic AR-content environments using visual SLAM and a keyframe technique, which are image-based map-generation techniques. Robert, Georg, and David (2008) stored the generated maps to overcome the hardware overloading that occurs as SLAM map size increases; they studied parallel tracking and multiple mapping techniques that generate the most appropriate form of map files through feature extraction and registration from real locations and map them to reality. In addition, Alessandro, Pier, Alfredo, and Cees (2019) evaluated the applicability and performance of SLAM-based AR and additive manufacturing technologies for maintenance in aviation, resulting in shorter work-execution times and fewer work errors. Spatial mapping techniques can easily match multiple types of augmented models. However, changing the hardware according to the size of the designed map is tedious; moreover, this methodology tracks the position and posture of the cameras instead of tracking the objects. Consequently, errors usually accumulate in each registered model, so these techniques are not suitable for precise work. In this study, rather than reconstructing the surrounding environment, we present a process that clearly detects and registers only the target model. Therefore, unlike with the spatial mapping technique, precisely reconstructed map data do not need to be continuously maintained, and the hardware burden is lower.

2.2.3 Reverse engineering method

Francesca et al.
(2011) implemented a faster object-recognition environment based on SURF algorithms for maintenance and training systems. They replaced the CAD model required for the operation with an augmented one to study its consistency with real objects. CurvSurf of Korea developed a system that detects actual pipe-shape patterns from image data and reverse engineers the 3D virtual CAD shape based on the RANSAC algorithm. Unlike spatial mapping, the reverse engineering-based registration method produces minimal error and is significantly more accurate. This method frequently utilizes RANSAC algorithms, which can robustly ignore outliers arising from the uncertainty of image-data variables. However, because it requires the detection of accurate shape information of the model, it is difficult to detect complex shapes. Because the test-system scenarios pursued in this study require comparing pre-designed virtual models with real models, a reverse engineering system that reconstructs a model from the real model is not appropriate.

2.2.4 Machine-learning method

Ahn, Choi, and Kweon (2017) used the non-maximum suppression technique to detect bounding boxes and studied improvements in posture-estimation performance using a bounding-box regression technique to recognize the driver's facial posture for intelligent automobiles. Chae, Ko, Lee, and Kim (2019) estimated pipe locations based on a threshold, using a deep learning-based convolutional neural network model applied to images obtained from ground-penetrating radar. In this study, the piping model, which is the target model, exhibits a simple form without distinctive features: owing to the industrial characteristics, both the material and the shape follow patterns with no significant external features. It is therefore difficult to implement a reliable system through machine learning, which requires diverse datasets.
In addition, as a system to support field workers, this approach would require an even larger and more diverse set of datasets, because it must detect only the information required for the work rather than all information related to the piping models in the image.

2.2.5 Previous study

In a previous study (Lee, Lee, Lee, & Nam, 2019), we set a region of interest (ROI) in the image using a depth camera and studied a registration method that removes the noise around the target object using a color filter and the k-nearest neighbor (KNN) technique. A CAD model was used, and the two models were registered using the iterative closest point (ICP) algorithm. However, the process was slow for the accuracy achieved. To resolve this issue, the present study excludes high-cost algorithms such as the color filter and KNN. In addition, because the ICP algorithm used in the matching stage operates point to point, the computational load increases sharply with the amount of data. Therefore, the boundary of the target model was extracted to minimize the amount of data processed by the ICP algorithm. Moreover, previous studies focused on registration using scan data from the cameras instead of the CAD model, which is not suitable for O&M research requiring a comparative analysis between the real and designed models; difficulties were also encountered with missing data and delicate handling. Therefore, in this study, the design data were converted into a 3D point cloud.

3. Registration Process and Development Environment

This study presents a markerless AR-based registration system. The overall process, illustrated in Fig. 2, can be divided into two parts: extraction of the target model based on camera data, and generation of a 3D model based on drawing data. The steps of the process are given below.
1. Import RGB (2D image) and XYZ (3D depth) data from an RGB-D (depth) camera.
2. Create a 3D point cloud in XYZ-RGB format through preprocessing.
3. Define the ROI through the UI (user interface).
4. Detect the boundaries within the ROI.
5. Convert the drawing data (PCF) into a 3D point-cloud model.
6. Register the two point clouds.

Figure 2: Registration process.

The development environment is illustrated in Fig. 3. This study was conducted on a tablet PC because it is based on portable devices that can be used in an industrial environment; the detailed specifications are presented in Fig. 3. A Kinect v2 camera was used, and the CAD model was converted to a point-cloud model using a drawing exchange file format called PCF. Qt was used for the UI, the Point Cloud Library (PCL) was used as the point-cloud visualization environment, and C++ was used as the development language.

Figure 3: Development environment—software and hardware.

3.1 Input data preprocessing

Object-detection methodologies using conventional (2D) cameras analyze object patterns based on the image data (color and contrast) obtained from the camera. However, because color-based data processing has several limitations in industrial applications, various methods are being developed to detect and utilize 3D spatial information (X, Y, Z) from images. Prados and Faugeras (2005) studied the reconstruction of variable objects, including faces, using a light source at the center of the camera, based on shape-from-shading to perform geometrical estimation from intensity images. However, tracking 3D objects with only light and color requires a complex computational process for real-time processing, which places a considerable burden on the system.
Additionally, such approaches generally produce non-standardized noise owing to their sensitivity to surrounding environmental factors (light, material). Therefore, in this study, a 3D camera with a depth sensor operating on the time-of-flight (ToF) principle was used. ToF cameras either modulate near-infrared lasers/LEDs and measure the shift in the phase of the return signal (usually referred to as continuous-mode devices) or use high-frequency (physical or electronic) shutters in front of the image sensor to gate the returning pulses of light according to their time of arrival (usually referred to as shuttered or gated devices) (Sean et al., 2014). Such cameras provide fast, accurate, and intuitive 3D coordinate information that enables easy application of additional algorithms and problem analysis. In this study, as shown in Fig. 4, XYZ-RGB-type point-cloud data were generated by combining the coordinate information (XYZ) obtained from the depth sensor with the color information (RGB) obtained from the image sensor.

Figure 4: XYZ-RGB point-cloud data.

3.2 Defining the ROI

The ToF-based 3D camera scans over 200 000 points per frame at 30 fps in real time. However, the target model to be detected constitutes only a small area of the entire image. Various pipe-detection methods have been studied; however, considering the working environment, an operator can only use a portable, relatively low-performance hardware device, and running a full pipe-recognition algorithm in this environment makes it difficult to achieve the desired processing speed because of the considerable computational cost. Therefore, in this study, the ROI was defined around the target model, and unnecessary regions were removed and excluded from real-time processing.
The ROI, generated as a cube from the complete scanned area, is shown on the right side of Fig. 5. The cube is defined by six parameters—the minimum and maximum of X, Y, and Z—and is manually controlled through the UI, as shown on the left side of Fig. 5. The unnecessary noise data distributed outside the region are removed, leaving only the area defined around the target model.

Figure 5: UI to define the ROI (left) and the denoised ROI (right).

3.3 Extracting the boundary of the point cloud (3D edge detection)

Even after the ROI is determined through the aforementioned process, superfluous data such as the background (wall) remain, which can significantly affect the registration accuracy. In addition, the residual data negatively affect the processing speed because the object pattern is detected in real time. The most common method for tracking or recognizing objects in an image is to detect their edges. The detected edges can be defined as a key-point pattern that represents the object while distinguishing it from the surrounding background; this type of detection is therefore critical in the preprocessing stage of object recognition. Huan, Xiangguo, Xiaogang, and Jixian (2016) proposed an efficient RANSAC-based edge-detection method for 3D point clouds, called Analysis of Geometric Properties of Neighborhoods. Shaobo and Ruisheng (2017) defined the displacement between the current point and its local geometric center as the gradient of the 3D point cloud, computed through images, to reduce the edge-detection errors caused by periodically changing image data. These studies show that such edge-detection methods are generally suitable when high-performance PCs are available or real-time repetitive tasks are not required.
However, such methods are difficult to implement in this study because real-time processing is required on a device that can be deployed in the industrial field. Therefore, 3D edges were detected here using a simpler algorithm than those in the related studies introduced earlier. The data scanned from the 3D camera are stored at 2D resolution, as shown in Fig. 5: in a conventional camera a color value is saved for each pixel, whereas in the 3D camera the X, Y, and Z values are saved for each pixel. An edge is defined as a boundary line that appears where surfaces meet, that is, where objects are no longer connected on the same plane. The data stored at 2D resolution contain objects located at different depths in 3D space, with no empty pixels between the objects; hence, the core parameter for edge determination is the depth value. Thus, a point was classified as an edge when the difference between the central point (edge candidate) and its neighboring points exceeded 20 cm within a 3 × 3 kernel, as shown in Fig. 6.

Figure 6: Edge detection.

Because it is difficult for the algorithm to detect a clean edge at oblique camera scanning angles, some residual noise persists. However, because the ICP algorithm used later for registration employs an iterative process, these noise errors can be ignored. The implementation results are shown in Fig. 7; this process reduces the amount of data from roughly 200 000 points to approximately 8000.

Figure 7: Number of data instances left after ROI filtering (left) and 3D edge detection (right).
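The ROI filter (Section 3.2) and the depth-difference edge test described above can be sketched as follows. This is a minimal illustration, not the system's C++/PCL implementation: the 20 cm threshold and the 3 × 3 kernel follow the text, while the array layout and function names are our own assumptions.

```python
import numpy as np

def crop_roi(points, roi_min, roi_max):
    """Keep only points inside the axis-aligned ROI cube.
    points: (H, W, 3) array of XYZ values stored at the camera's 2D resolution."""
    mask = np.all((points >= roi_min) & (points <= roi_max), axis=-1)
    out = points.copy()
    out[~mask] = np.nan          # mark points outside the cube as removed
    return out

def detect_edges(points, threshold=0.20):
    """Mark a pixel as an edge point when the depth (Z) difference to any
    neighbour in the 3x3 kernel exceeds the threshold (20 cm).
    Invalid (NaN) neighbours simply never trigger the test."""
    z = points[..., 2]
    h, w = z.shape
    edges = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # shift the depth image by (dy, dx) so each pixel sees one neighbour
            shifted = np.full_like(z, np.nan)
            shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
                z[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            edges |= np.abs(z - shifted) > threshold
    return edges
```

On a full 200 000-point frame this pipeline keeps only the ROI points and then only those whose depth jumps relative to a neighbour, which is what reduces the data to the few thousand boundary points fed to registration.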
3.4 Conversion of the CAD model to a point-cloud model

Registration performance is assessed by calculating the correlation between the two sets of data: the greater the similarity between them, the higher the reliability. Therefore, a point-cloud CAD model with the same shape as the actual model is required. Although commercial tools are available to convert a CAD model into a 3D point-cloud shape, a CAD model is essentially defined in mesh units and is implemented with minimal vertex information. Therefore, when a 3D CAD model is converted into a point-cloud model, it is difficult to create a point model as dense as the scanning data used earlier. In this study, therefore, 3D point-cloud reconstruction was performed based on PCF, a drawing exchange file format. The PCF provides a simplified format for data exchange: it includes the model routing information and component information of the ISO drawing as a .txt file, and only the minimum model-shape information is stored and shared, as shown in Fig. 8.

Figure 8: PCF viewed in AFT Fathom (left) and as a text file (right).

3.4.1 Transforming pipe geometry

For a pipe, the PCF provides two points—the lower center P1 and the upper center P2—and the radius, as shown in Fig. 9. As the pipe model is essentially cylindrical, the point-cloud shape is defined by repeatedly generating a circle at regular intervals from P1 to P2.

Figure 9: Converting the pipe model into a point cloud.

Most pipes extend along a single axis, as shown in Fig. 10; however, they can be diagonal depending on the situation. In this study, the shape of the pipe was rotated using the following equation to define the diagonal shape.
An arbitrary point P3 (sharing coordinate-axis values with P2 and P1) at the same distance from P1 as P2 was created, as shown in Fig. 10. The procedure is as follows: (i) create the pipe between P1 and P3; (ii) use the dot product to calculate the angle between the P1–P3 and P1–P2 vectors; and (iii) rotate the pipe created in Step (i) by the angle obtained, using Equation (1):
$$\begin{eqnarray} \left( {\begin{array}{@{}*{1}{c}@{}} {x^{\prime}}\\ {y^{\prime}} \end{array}} \right) = \left( {\begin{array}{@{}*{2}{c}@{}} {\cos \theta }& \quad { - \sin \theta }\\ {\sin \theta }& \quad {\cos \theta } \end{array}} \right)\,\,\left( {\begin{array}{@{}*{1}{c}@{}} {x - a}\\ {y - b} \end{array}} \right) \end{eqnarray}$$(1)

Figure 10: Diagonal pipe definition process.

3.4.2 Transforming elbow geometry

For an elbow, the PCF provides both endpoints (P1 and P2), the point between them, the center point, and the radius value. As shown in Fig. 11, the elbow has a shape that would require multiple rotational transformations; however, multiple rotational transformations in 3D space can cause the gimbal-lock phenomenon. Therefore, the elbow shape was defined here using the spherical (quaternion) interpolation formula in Equation (2). This equation defines the path of the arc connecting two points; the path can be divided into nodes at regular intervals using the parameter t, and the coordinate of each node is obtained to generate the arc-shaped point cloud:
$$\begin{eqnarray} {\rm{P}}3 = \frac{{\sin \left( {1 - t} \right)\theta }}{{\sin \theta }}P1 + \frac{{\sin t\theta }}{{\sin \theta }}P2 \end{eqnarray}$$(2)

Figure 11: Elbow model (left) and concept of quaternion interpolation (right).
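The pipe and elbow conversions of Sections 3.4.1 and 3.4.2 can be sketched as follows. This is an illustrative reconstruction under our own assumptions: the function names, point counts, and orthonormal-frame construction are ours, and the elbow arc is written as standard spherical linear interpolation with the elbow center placed at the origin.

```python
import numpy as np

def circle_points(center, axis, radius, n=16):
    """Points on a circle of the given radius, centred at `center`,
    lying in the plane perpendicular to the pipe axis."""
    axis = axis / np.linalg.norm(axis)
    # any reference vector not parallel to the axis yields a perpendicular frame
    ref = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return center + radius * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))

def pipe_cloud(p1, p2, radius, step=0.05, n=16):
    """Cylinder between P1 and P2: repeat the cross-section circle at
    regular intervals along the axis, as in Section 3.4.1."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    axis = p2 - p1
    length = np.linalg.norm(axis)
    rings = [circle_points(p1 + axis * s, axis, radius, n)
             for s in np.linspace(0.0, 1.0, max(2, int(length / step) + 1))]
    return np.vstack(rings)

def slerp_arc(p1, p2, n=10):
    """Nodes of the elbow arc via Equation (2): spherical interpolation
    between two endpoints equidistant from the (origin) centre.
    Assumes P1 and P2 are not parallel (sin(theta) != 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    theta = np.arccos(np.clip(np.dot(p1, p2) /
                              (np.linalg.norm(p1) * np.linalg.norm(p2)), -1.0, 1.0))
    t = np.linspace(0.0, 1.0, n)
    return (np.sin((1 - t) * theta)[:, None] * p1 +
            np.sin(t * theta)[:, None] * p2) / np.sin(theta)
```

Interpolating on the sphere keeps every node at the same distance from the elbow center, which is why this route avoids the chained rotations (and hence the gimbal-lock risk) mentioned in the text.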
The procedure followed to define the elbow shape is presented below.

1. Move the center point, as shown in Fig. 11.
2. Generate circles at P1 and P2.
3. Form a triple pair connecting one point of each circle to the center point.
4. Create a point-cloud arc between the paired points through quaternion interpolation.
5. Return to Step 3 and repeat the process (refer to Fig. 12).

Figure 12: Converting the elbow model to a point cloud.

The advantage of achieving the elbow-shape transformation through quaternion interpolation is that it simplifies the problem to be solved during the conversion. If the previous process is performed accurately, the elbow shape is generated appropriately. Because the center point is fixed, it need not be considered; it is only necessary to form the circles at P1 and P2, which form the ends of the elbow shape. Correspondingly, the orientation of the elbow must be determined; thus, as shown in Fig. 13, the normal vector was first calculated from the two vectors v1 and v2 to determine the orientation of the elbow model.

Figure 13: Direction of the torus model.

The torus model was divided into four regions. The scalar values of vectors v1 and v2 were calculated, as shown in Fig. 14, to determine the region of the torus that corresponds to the elbow.

Figure 14: Part of the torus model.

3.4.3 Transforming reducer, flange, tee, and valve geometry

The other transformation models are reducers, flanges, tees, and valves. The shape definitions of the reducer, flange, and tee are very similar to that of the pipe, as shown in Fig. 15.
The valve shape is produced through forging and casting; it therefore has a very complex shape, and no specific shape information is provided in the PCF. The transformation of the wheel model was omitted here, and the valve shape was defined as a form connecting two flange shapes and a reducer shape, as shown in Fig. 15.

Figure 15: Converting reducer, flange, tee, and valve models into point clouds.

Although this approach allows the number of points required for registration to be adjusted, insufficient data may lead to larger errors during registration. Because the edges of the point cloud generated from the camera were detected, the CAD shape had to be matched to a similarly sparse shape. Applying the edge-detection method described earlier directly to the CAD shape was difficult because the CAD data are not stored in the scanner-style layout of the camera data. Therefore, in this study, the CAD pipe shape was created by taking points at 90° intervals, as shown in Fig. 16.

Figure 16: Creating a point cloud by dividing points at 90° intervals.

3.5 Registration

Registration is performed to analyze the positional correlation between the reference and target models. To execute this, it is necessary to acquire distance images from various directions and synthesize them. Here, the ICP algorithm was used to derive a matrix that estimates the rotation and translation components between the two models (point clouds). The ICP algorithm is frequently used in the field of image processing: the closest matches between the two data models are composed into matching pairs, and the rotation and translation components between the two datasets are calculated.
In addition, this algorithm gradually moves the model through repeated iterations of the rotation and translation components using gradient descent. Finally, the distance error between the two data forms converges toward zero. Because each iteration reconfigures a new set of matching pairs, the algorithm has considerable computational complexity. In general, the computation is performed over all remaining data; thus, various approaches have been studied, such as using points extracted at certain intervals to reduce the computational complexity, or applying the ICP algorithm only to corresponding points extracted as features from the distance images. Majd (2007) introduced a line-based registration variant called the iterative closest line (ICL), in which line features are extracted from the range scans and aligned to obtain a rigid-body transformation. However, it was difficult to apply this method to the piping model used in this study owing to problems encountered in extracting the line units. Dror, Niloy, and Daniel (2008) proposed the 4PCS algorithm, which samples coplanar four-point sets based on RANSAC, because congruent coplanar four-point sets can be extracted efficiently with affine invariance. However, because the RANSAC algorithm also involves significant computational complexity, it is not appropriate here. Men and Pochiraju (2014) improved the speed and accuracy of the ICP algorithm through hue-based downsampling. However, color data can only be used when sufficient color information is reflected in reality, which is difficult to expect in shipbuilding and offshore industry environments. In this study, a 3D edge-data-based ICP algorithm was used, and the two models were matched by applying the translation and rotation components extracted between the CAD and scanning models to the CAD model. The results indicate that the two point-cloud models were well matched despite the presence of residual noise, as shown in Fig. 17.
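The pairing-and-alignment loop described above can be sketched as follows. This is a generic ICP illustration under stated assumptions (brute-force nearest-neighbour pairing and the closed-form SVD/Kabsch alignment), not the paper's edge-data-based implementation; `icp_step` and `icp` are hypothetical names.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour pairing followed by the
    closed-form (SVD/Kabsch) rigid alignment of the matched pairs."""
    # Nearest neighbour in dst for every src point (brute force).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best rotation/translation between src and its matches.
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Repeat pairing and alignment; the pair distance shrinks each pass."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

The brute-force pairing makes the cost quadratic in the point count, which is why the paper reduces the data first (ROI and 3D edge extraction) before running ICP.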
The final result is shown in Fig. 18.

Figure 17: Registration of two point-cloud models.

Figure 18: Pipeline recognition and overlaying with CAD data.

4. Implementation and Evaluation

As a result of applying the aforementioned registration methodology, the registration is maintained if the camera or model remains static, and the object is tracked and rematched in real time if it moves. To evaluate the accuracy of the two models, the hexahedral region containing the overall shape was extracted by calculating the maximum and minimum values of each coordinate parameter (X, Y, and Z) based on the CAD shape. This region was divided at a certain interval, and the average point was calculated from the coordinate values of the points in each subregion. The average point coordinates were calculated by dividing each model's data into 125 regions, as shown in Fig. 19. The red and blue points represent the average points calculated from the CAD and camera data, respectively. The average error was computed by calculating the difference in the distances between the two points for each parameter.

Figure 19: Extraction of mean point for each area.

However, as the camera has the characteristics of a scanner, objects behind were obscured by objects in front and yielded fewer scanned data. This phenomenon affected the evaluation of the registration accuracy performed in this study and could render the evaluation ineffective for such regions; therefore, the registration errors were calculated and compared only for the frontal area. The errors were in the range of 10–20 mm, as listed in Table 1.
If either of the two data models did not include a point in an area, that area was excluded from the evaluation.

Table 1: Error distance calculated based on mean points.

Region      | Error-X (mm) | Error-Y (mm) | Error-Z (mm)
R10         | −6.403       | −5.045       | −6.162
R11         | −2.852       | −4.346       | −8.048
R32         | 1.3516       | −3.491       | −4.4504
R33         | 38.6404      | −1.818       | −25.580
R36         | −5.1408      | −6.704       | 7.802
R37         | −1.549       | 3.067        | 10.0845
R38         | −7.392       | 6.052        | 6.169
R40         | 47.594       | −46.454      | −26.1228
R43         | 2.746        | −6.291       | 4.0886
R44         | −3.593       | 2.888        | −11.4424
R45         | −12.719      | −5.661       | −5.644
R47         | 17.435       | 33.697       | −26.977
Mean error  | 12.28        | 10.46        | 11.88

The results summarized in Table 1 indicate that areas with large errors exist, such as R33, R40, and R47.
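The region-wise mean-point comparison above can be sketched as follows, assuming both models are given as N×3 NumPy arrays; `region_mean_errors` and the `divisions` parameter are illustrative names, with `divisions=5` giving the 125 regions used in the paper, and regions empty in either model are skipped as described.

```python
import numpy as np

def region_mean_errors(cad_pts, scan_pts, divisions=5):
    """Split the CAD bounding box into divisions**3 regions (125 for 5),
    compute the mean point of each model per region, and return the
    per-axis difference between the two means for regions both occupy."""
    lo, hi = cad_pts.min(0), cad_pts.max(0)
    size = (hi - lo) / divisions

    def region_ids(pts):
        # Clip so points on/outside the CAD box fall into edge regions.
        idx = np.clip(((pts - lo) / size).astype(int), 0, divisions - 1)
        return idx[:, 0] * divisions**2 + idx[:, 1] * divisions + idx[:, 2]

    ids_c, ids_s = region_ids(cad_pts), region_ids(scan_pts)
    errors = {}
    for r in set(ids_c) & set(ids_s):   # skip regions missing in either model
        errors[r] = scan_pts[ids_s == r].mean(0) - cad_pts[ids_c == r].mean(0)
    return errors
```

Comparing region means rather than raw nearest-point distances averages out the surface-only sampling of the scanner within each cell.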
These errors are caused by the valve wheel model, whose conversion process was omitted in this study. The average error was recalculated by excluding these areas, and the resulting error was found to be less than 5 mm. This enables users to experience the mapping between the real model and the CAD shape without any sense of incongruity when using the AR system. The factors contributing to the residual error (approximately 5 mm) were analyzed as follows. The first factor was the error resulting from the scanner property: because the camera is a scanner, only the surface facing it was detected. Therefore, it did not capture a complete model of a cylindrical shape, such as a pipe, which produced a registration error against the CAD shape, which represents the full shape. In the future, this error between the two model shapes can be eliminated by combining technology that reconstructs a complete 3D shape from multi-scanning data captured from various viewpoints. Second, denoising processes such as ROI designation and edge extraction were performed; however, some noise persisted owing to the background, shadows, and the dead-bending phenomenon. Because each of these noise sources remained in the form of a small cluster, a clustering algorithm can be used to achieve efficient noise removal and object boundary extraction. Finally, a few errors were caused by weight: the real model was affected by its weight, which caused a deflection phenomenon. However, because the virtual CAD model was unaffected by this, it was matched to a position higher than that of the real model, as shown in Fig. 17. This was difficult to distinguish from registration inaccuracy; however, it was reflected in the error factors listed in Table 1, which were derived according to the quantitative evaluation method implemented in this study.
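The cluster-based noise removal suggested above could look like the following minimal sketch. It is not the paper's implementation: `remove_small_clusters`, `radius`, and `min_size` are illustrative, and a simple flood fill stands in for a DBSCAN-style clustering, dropping any cluster smaller than `min_size`.

```python
import numpy as np

def remove_small_clusters(pts, radius=15.0, min_size=30):
    """Drop points belonging to clusters smaller than min_size, where a
    cluster is grown by linking points closer than `radius` apart
    (a flood-fill stand-in for DBSCAN-style noise filtering)."""
    n = len(pts)
    # Pairwise adjacency: brute force, fine for the small residual clouds.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = d2 <= radius * radius
    label = np.full(n, -1)
    cur = 0
    for i in range(n):
        if label[i] != -1:
            continue
        stack = [i]
        label[i] = cur
        while stack:                       # flood-fill one cluster
            j = stack.pop()
            for k in np.where(adj[j] & (label == -1))[0]:
                label[k] = cur
                stack.append(k)
        cur += 1
    counts = np.bincount(label)
    return pts[counts[label] >= min_size]  # keep only large clusters
```

Because the residual background and shadow noise forms small isolated clusters while the piping itself is one large connected cluster, a size threshold alone separates the two.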
In addition, the amount of residual data and the time required to perform the registration are summarized in Table 2 to analyze the performance difference with respect to previous research results.

Table 2: Registration speed comparison analysis.

Previous study — Algorithm (ROI, color, KNN)
Scanning data | Angle (deg) | Time (s)
34 240        | 10          | 25.4
33 921        | 20          | 27.9
34 308        | 30          | 33.2
34 121        | 40          | 35.2
34 221        | 50          | X

This study — Algorithm (ROI, 3D edge)
Scanning data | Angle (deg) | Time (s)
7291          | 10          | 4.2
7309          | 20          | 4.3
7295          | 30          | 4.4
7312          | 40          | 5.0
7323          | 50          | X

In the comparison experiment, the same CAD model was used for all cases. Because the registration performance varies according to the initial angle between the two data forms, the initial generation angle of the CAD model was changed to calculate the time for each case. The angle indicates the value by which the model must be rotated during registration.
In the previous study, the time required for registration was much greater, approximately 25–35 s, and the time increment was also large because the number of repetitions required by the ICP algorithm was high owing to the low frame rate. The method used in this study required an average time of approximately 4–5 s, which is feasible considering the time taken for workers to compare and analyze the drawings and models in the industrial field. Additionally, when the CAD generation angle deviated by more than 50°, the matching did not perform well owing to local minima. This phenomenon can be sufficiently improved in the future by performing a simple position correction using a marker.

5. Conclusions

This study proposed an approach to reduce the limitations of existing AR-based systems used for the maintenance of ships and plants. ROI and 3D-based edge-search technologies were used to address the issues related to the registration speed reported in previous studies and to reduce the computation by over five times; consequently, an 8-fold improvement in the registration speed was achieved. In addition, the drawing file used in the industry was examined, and a point-cloud converter was developed to build a more flexible data-processing environment. Through this study, a registration system that can be used even in a complex worksite environment was implemented, and high-speed registration was achieved in a low-spec portable device environment. This system can transmit work information to workers more intuitively than a drawing-based system, which can minimize work errors and improve efficiency. In addition, this approach can be used for several tasks, such as pipe installation, maintenance, and remote collaboration. This study is expected to be extended to various fields because its internal algorithms impose no restrictions on elements based on their shapes.
However, even in the proposed system, some noise data persist, which can lead to registration errors. Therefore, the focus of future research will be to increase registration reliability. The most influential factors in achieving registration reliability are the formation posture and location of the initial CAD shape. The ICP algorithm used for registration ignored small noise data and indicated that the overall shapes of the two models were well matched. However, this algorithm might fall into local minima because it uses a gradient-descent-based method. The results of the experiment indicated that the registration system used in this study was influenced by the initial torsion (rotation) of the CAD shape. To resolve this issue, we are developing a method to detect the piping model in the area scanned by the camera based on the drawing data (PCF) utilized in this study. Through this, we can determine the location information of the installed piping and correct the initial position of the CAD model accordingly. In addition, we expect that this method will eliminate the inconvenience of manually specifying an ROI.

Acknowledgement

This work was supported by an Inha University Research Grant.

Conflict of interest statement

None declared.

References

Ahn B. T., Choi D. G., Kweon I. S. (2017). Multi-scale, multi-object and real-time face detection and head pose estimation using deep neural networks. Journal of Korea Robotics Society, 12(3), 313–321.

Alessandro C., Pier M., Alfredo L., Cees B. (2019). Maintenance in aeronautics in an industry 4.0 context: The role of augmented reality and additive manufacturing. Journal of Computational Design and Engineering, 6(4), 516–526.

Bok Y. S., Hwang Y. B., Kweon I. S. (2007). UGV localization based on scene matching and pose estimation. Proceedings of 2007 Annual Conference, KIMST, 1144–1150.
Chae J. H., Ko H. Y., Lee B. G., Kim N. G. (2019). A study on the pipe position estimation in GPR images using deep learning based convolutional neural network. Korean Society for Internet Information, 20(4), 39–46.

Daniel W., Dieter S., Horst B. (2009). Multiple target detection and tracking with guaranteed framerates on mobile phones. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality 2009 (pp. 19–22).

Dror A., Niloy J. M., Daniel C. O. (2008). 4-Points congruent sets for robust pairwise surface registration. ACM Transactions on Graphics, 27, 1–10.

Francesca D. C., Massimiliano F., Franco P., Luigi D., Pietro A., Samuele S. (2011). Augmented reality for aircraft maintenance training and operations support. IEEE Computer Graphics and Applications, 31(1), 96–101.

Georg K., David M. (2007). Parallel tracking and mapping for small AR workspaces. In Proceedings of the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'07), Nara, Japan.

Huan N., Xiangguo L., Xiaogang N., Jixian Z. (2016). Edge detection and feature line tracing in 3D-point clouds by analyzing geometric properties of neighborhoods. Remote Sensing, 8(9), 710.

Jang D. H., Mun H. G., Sohn S. W., Suh H. W., Han S. W. (2011). Development of maintenance management system for ships and offshore plants facilities. The Korean Association of Ocean Science and Technology Societies, 1262–1271.

Jung H. K., Park H. J. (2019).
Feature points-based object tracking using infrared sensors in augmented reality environments. Korean Journal of Computational Design and Engineering, 24(3), 347–360.

Kang Y., Han S. (2014). An alternative method for smartphone input using AR markers. Journal of Computational Design and Engineering, 1(3), 153–160.

Kim D. W., Ji J. H., Yun D. H., An H. G., Song G. l., Abid Hasan S. M., Mamona A., Ko K. H. (2015). Ship block detection and pose estimation with augmented reality. In Proceedings of the Society of CAD/CAM Conference (pp. 988–991).

Kim J. P., Lee D. C. (2014). Development of mobile location based service app using augmented reality. Journal of the Korea Institute of Information and Communication Engineering, 18, 1481–1487.

Kwon Y. H., Chae Y. G. (2015). An improved object recognition and tracking algorithm based on block matching. Journal of Korean Institute of Information Technology, 13, 61–68.

Lee J. H., Ko K. H. (2018). Utilization of GPS and IMU sensors in the initial registration of two point clouds. Korean Journal of Computational Design and Engineering, 23(2), 173–183.

Lee J. M., Lee K. H., Kim D. S. (2012). Mobile-AR inspection system based on RF-Marker to improve marker detection. Korean Journal of Computational Design and Engineering, 17, 208–215.

Lee J. M., Lee K. H., Kim D. S. (2013). Image analysis module for AR-based navigation information display. Journal of Ocean Engineering and Technology, 27, 22–28.

Lee W. H., Lee K. H., Lee J. J., Nam B. W. (2019).
A study on pipe model registration for augmented reality based O&M environment improving. Journal of the Computational Structural Engineering Institute of Korea, 32(3), 191–197.

Lee S. H., Omer A. (2011). Augmented reality-based computational fieldwork support for equipment operations and maintenance. Automation in Construction, 20(4), 338–352.

Majd A. (2007). ICL: Iterative closest line. A novel point cloud registration algorithm based on linear features. International Society for Photogrammetry and Remote Sensing, (10), 53–59.

Martin H. (2008). Marker Detection for Augmented Reality Applications. Institute of Computer Graphics and Vision, Graz University of Technology, Austria.

Men H., Pochiraju K. (2014). Hue-assisted automatic registration of color point clouds. Journal of Computational Design and Engineering, 1(4), 223–232.

Nam B. W., Lee K. H., Lee W. H., Lee J. D., Hwang H. J. (2019). A study on smart accuracy control system based on augmented reality and portable measurement device for shipbuilding. Journal of the Computational Structural Engineering Institute of Korea, 32, 65–73.

Prados E., Faugeras O. (2005). Shape from shading: A well-posed problem? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 870–877), San Diego, USA.

Robert C., Georg K., David W. M. (2008). Video-rate localization in multiple maps for wearable augmented reality. In Proceedings of the International Symposium on Wearable Computers (ISWC'08).

Sean R. F.
, Cem K., Shahram I., Pushmeet K., David K., David S., Antonio C., Jamie S., Sing B. K., Tim P. (2014). Learning to be a depth camera for close-range human capture and interaction. ACM Transactions on Graphics, 33(4), 86:1–86:11.

Shaobo X., Ruisheng W. (2017). A fast edge extraction method for mobile LiDAR point clouds. IEEE Geoscience and Remote Sensing Letters, 14(8), 1288–1292.

Simon G., Berger M. O. (2002). Reconstructing while registering: A novel approach for markerless augmented reality. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (pp. 285–294).

Sun J., Hiekata K., Yamato H., Nakagaki N., Sugawara A. (2014). Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method. Journal of Computational Design and Engineering, 1(3), 202–212.

Tomohiro F., Kazuki Y., Nobuyoshi Y., Ali M. (2019). An indoor thermal environment design system for renovation using augmented reality. Journal of Computational Design and Engineering, 6(2), 179–188.

Yang W. J., Kwon S. J., Keum J. S. (2004). An analysis of human factor in marine accidents. Journal of the Korean Society of Marine Environment & Safety, 24, 7–11.

© The Author(s) 2020. Published by Oxford University Press on behalf of the Society for Computational Design and Engineering.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. TI - Registration method for maintenance-work support based on augmented-reality-model generation from drawing data JO - Journal of Computational Design and Engineering DO - 10.1093/jcde/qwaa056 DA - 2020-12-10 UR - https://www.deepdyve.com/lp/oxford-university-press/registration-method-for-maintenance-work-support-based-on-augmented-R2D31dGpWn SP - 775 EP - 787 VL - 7 IS - 6 DP - DeepDyve ER -