Self-Calibration and Crosshair Tracking with Modular Digital Imaging Total Station

Abstract

The combination of a geodetic total station with a digital camera opens up the possibilities of digital image analysis of the captured images together with angle measurement. In general, such a combination is called an image-assisted total station (IATS). The prototype of an IATS called MoDiTa (Modular Digital Imaging Total Station) developed at i3mainz is designed in such a way that an existing total station or a tachymeter can be extended by an industrial camera in a few simple steps. The ad hoc conversion of the measuring system opens up further areas of application for existing commercial measuring systems, such as high-frequency aiming, autocollimation tasks or tracking of moving targets. MoDiTa is calibrated directly on site using image-processing and adjustment methods. The crosshair plane is captured for each image and provides identical points in the camera image as well as in the reference image. However, since the camera is not precisely coaxially mounted and movement of the camera cannot be ruled out, the camera is continuously observed during the entire measurement. Various image-processing algorithms determine the crosshairs in the image and compare the results to detect movement. In the following, we explain the self-calibration and the methods of crosshair detection as well as the necessary matching. We use exemplary results to show to what extent the parameters of self-calibration remain valid even if the distance, and thus the focus, between instrument and target object changes. Through this, one calibration is applicable for different distances and eliminates the need for repeated, time-consuming calibrations during typical applications.

Keywords: Self-calibration · Crosshair · Image Assisted Total Station · Image processing · Image matching · Tracking

Zusammenfassung

Selbstkalibrierung und Strichkreuzverfolgung mittels einer modularen digitalen bildgebenden Totalstation. Die Kombination einer geodätischen Totalstation mit einer digitalen Kamera eröffnet die Möglichkeit der digitalen Bildanalyse der aufgenommenen Bilder zusammen mit der Winkelmessung. Im Allgemeinen wird eine solche Kombination als Image Assisted Total Station (IATS) bezeichnet. Der am i3mainz entwickelte Prototyp einer IATS namens MoDiTa (Modular Digital Imaging Total Station) ist so konzipiert, dass eine bestehende Totalstation oder ein Tachymeter mit nur wenigen Handgriffen um eine Industriekamera erweitert werden kann. Die Ad-hoc-Erweiterung des Messsystems eröffnet weitere Anwendungsbereiche für bestehende kommerzielle Messsysteme wie hochfrequente Zielerfassung, Autokollimationsaufgaben oder die Verfolgung bewegter Ziele. MoDiTa wird direkt vor Ort mittels Bildverarbeitungs- und Ausgleichungsmethoden kalibriert. Die Fadenkreuzebene wird für jedes Bild erfasst und liefert identische Punkte sowohl im Kamerabild als auch im Referenzbild. Da die Kamera jedoch nicht exakt koaxial montiert ist und eine Bewegung der Kamera nicht auszuschließen ist, wird die Kamera während der gesamten Messung kontinuierlich beobachtet. Verschiedene Bildverarbeitungsalgorithmen bestimmen während der Messung das Fadenkreuz im Bild und vergleichen diese Ergebnisse, um Bewegungen zu erkennen. Im Folgenden werden die Selbstkalibrierung und die Methoden der Fadenkreuzerkennung sowie der notwendige Abgleich erläutert. Anhand exemplarischer Ergebnisse wird gezeigt, inwieweit die Parameter der Selbstkalibrierung auch dann gültig bleiben, wenn sich der Abstand und damit der Fokus zwischen Instrument und Zielobjekt ändert. Damit ist eine Kalibrierung für unterschiedliche Entfernungen anwendbar und erspart bei typischen Anwendungen die Wiederholung der zeitraubenden Kalibrierungen.
Kira Zschiesche (Kira.Zschiesche@hs-mainz.de), i3mainz Institute for Spatial Information and Surveying Technology, Mainz University of Applied Sciences, Lucy-Hillebrand-Straße 2, 55128 Mainz, Germany

1 Introduction

Modular Digital Imaging Total Stations show a wide range of experimental applications in the fields of engineering surveying and metrology (Atorf et al. 2019; Guillaume et al. 2016; Wagner et al. 2014). Recent reviews have been presented by Paar et al. (2021) and Zschiesche (2022). In short, the development of surveying instruments into more powerful and user-friendly tools is taking place through automation and the addition of new sensor technology. This can be seen, for example, in automatic target recognition (ATR) or the autofocus of so-called total stations. By extending a total station with one or more cameras, the possibilities of image processing also become available. A further advantage is the users' independence from the subjective impression of the observer's eye. However, the cameras used so far by instrument manufacturers primarily serve to improve interactive user workflows; they currently do not provide a real-time interface for user-specific image analysis or deep learning applications in the measurement process. Furthermore, the highest possible frame rate is significantly lower than the frame rate achievable by industrial cameras. For example, a multistation MS50 from the manufacturer Leica/Hexagon achieves 20 frames per second from a 5 MP coaxial camera and thus allows smooth user interaction. Looking into detail, the frame rate of 20 Hz is only achieved with respect to the VGA resolution of the display, 640 × 480 pixels (Grimm and Zogg 2013). Saving a full 2560 × 1920 pixel image to an SD card usually takes more than 2 s with JPEG compression and even more than 6 s in raw format. To be able to target applications that we believe require frame rates of around 1 Hz–1000 Hz, we have decided to continue the concept of external cameras to achieve these frame rates (Hauth et al. 2013), while integrating the motorised focus support of the multistations. We consider it an advantage that the external camera does not disturb the thermal design of the multistation even at high pixel clock rates.

During the process of prototype development, different types of construction emerged. External implementations make it possible to mount the camera on the ocular or to replace it. These are used in combination with commercial total stations or tacheometers and can be converted and adapted to the particular conditions and requirements. One example of such a modular system is DAEDALUS of ETH Zurich (Bürki et al. 2010; Charalampous et al. 2014; Guillaume et al. 2012, 2016). In this concept, a CCD chip replaces the eyepiece. The camera does not capture the crosshairs in the image. No additional optical component is added in between. This makes it necessary to attach a meniscus lens to the front of the telescope for distances of 13 m or more. This front lens shifts the focal plane to the image sensor for focused images. Similar to Huang and Harley (1989), the calibration is carried out by virtual control points, where the central projection is expressed by an affine approach. Instrument errors are not taken into account.
The developed application of the University of Zagreb attached a GoPro5 directly to the ocular (Paar et al. 2017, 2021). Another setup where the camera is directly attached to the eyepiece can also be found in Schlüter et al. (2009). For the measurement with the IATS at the University of Zagreb, videos are recorded which are later split into images. Here, photo targets with predefined circles of known diameter and distance between the circle centres are used. This enables the evaluation of the image data. For the frequency analysis, raw image coordinates are used and no camera calibration is required. However, the photo target must be attached to the object to be observed.

The second design offers the advantage of a camera fixed to the instrument, as in commercial IATS. Commercial IATS often have too low an image-acquisition speed for kinematic measurements (e.g. for frequency analysis in structural health monitoring). The fixed camera provides constant calibration parameters, as opposed to the modular version, which requires calibration after reconfiguration. An early prototype is mentioned in Walser (2004), and the prototype series IATS2 from the manufacturer Leica in Reiterer and Wagner (2012), Wagner et al. (2013, 2016) and Wasmeier (2009b). Walser (2004) describes the camera with an affine chip model and uses a combined approach to take camera and instrument errors into account. Wasmeier (2009a) shows a comparison of different methods.

The measuring system MoDiTa developed at i3mainz extends an existing instrument modularly by an external industrial camera. The self-calibration based on the photogrammetric camera model fully integrates the external camera into the measurement process. By permanently tracking the crosshair, the accuracy characteristics of the total station are maintained. In the following, we explain the measurement system and the calibration. The necessary image-based acquisition of the crosshair for the calibration and the further measurement process will be discussed in more detail. The approach used here shows how a calibration can be calculated flexibly on site using software and various cameras and total stations (compatible with the TCA, TPS, TS and MS series from the manufacturer Leica) without any additional equipment.

2 Measurement System

The Modular Digital Imaging Total Station (MoDiTa) combines a high-end industrial camera with a digital total station in a modular and flexible way and is currently at prototype level (Fig. 1). As described in Hauth et al. (2013), the standard eyepiece of the total station is replaced by an industrial camera via a bayonet ring. To balance the weight of the camera, we attached a counterweight to the telescope. The cameras can be mounted in any rotation around the target axis by means of a simple clamping screw. By means of a corresponding adapter, the eyepiece camera used takes images directly from the crosshair plane. The crosshair is thus captured in every image. Among other things, this enables automatic, image-based targeting, which is within the accuracies of the total station (standard deviation according to ISO 17123-3 2001). With the help of template matching, non-signalled distinctive features are captured without contact. The use of the total station's motorised autofocus is advantageous because, among other things, it enables simple self-calibration. After self-calibration, we calculate the corresponding horizontal or vertical angle for each point of interest in the image.

Fig. 1 The upper pictures show the ready-to-measure system MoDiTa in combination with a multistation MS50. The picture below shows the schematic structure of the eyepiece adapter for attaching the digital camera with the optics. The optics are attached to the eyepiece holder via an S-mount connection. This holder is connected to the total station via a bayonet connection for the eyepiece. The length of the eyepiece holder determines the magnification and thus how much of the crosshair is imaged onto the sensor. The digital camera is attached to the camera mount via a C-mount or CS-mount connection.

Due to the modular design of the measuring system, a camera can be selected depending on the respective project requirements. Project requirements might include:

- a monochrome, NIR or RGB (Bayer pattern) sensor,
- low-light suitability (usually by large pixel pitch) or high resolution,
- a global or rolling shutter,
- availability of a hardware trigger,
- availability of line scan modes,
- high frame rate (frames per second).

The industry-standard C-mount used makes it easy to replace components. Depending on the industrial camera used, images can be captured in different modes. By selecting an area of interest (AoI), the range of captured lines and columns can be defined. In line-wise mode, only one line is captured over the width of the image. The data to be transmitted can thus be reduced, enabling a higher image capture frequency. A more detailed description can be found in Hauth et al. (2013).
3 Self-Calibration

To obtain measurement results within the measurement accuracy of the total station, calibration of the entire system is required. This is done by self-calibration directly on site. Given the speed of the self-calibration process, we do not intend to achieve repeatability of the calibration parameters of the camera in different setups. The aim is rather to be able to use the measuring system quickly and in an application-oriented manner. The determination of interpolable parameters for a particular combination of camera and total station was never attempted. The user installs or replaces the camera on site and the measurement can be continued after calibration. Due to the simple mounting of the camera and the modular design, it is near impossible to recreate an identical setup. As a result, there are minimal differences in the optical path for each setup. Differences of several pixels in the image are possible.

Calibration is mainly carried out automatically and only needs to be operated manually by the user at the beginning. Before calibration, it is necessary to detect the crosshair to provide a reference image of the crosshair. The crosshair reference image ensures consistency of visual aiming through the eyepiece to camera-based aiming. Furthermore, the reference image of the crosshairs is used to correct any camera movements computationally, cf. Sect. 4. The telescope is moved relative to a fixed target point in such a way that the target point is imaged at favourably distributed locations on the image plane (Schlüter et al. 2009). This allows for the collection of data for an overdetermined linear system of equations. The software provides for different patterns with different distributions of the observation points in the image. The selection of patterns makes it possible to open up new fields of application in an applied, scientific environment by means of an inexpensive measuring system. For example, a comprehensive high-precision calibration with up to 36 measurements can be carried out for an investigation into atmospheric refraction. The implemented maximum number of observation points in the image is set to 9 points per quadrant (4 quadrants × 9 observation points = 36). It is possible to define fewer observation points, thus reducing the overdetermination. For a simpler example, see Fig. 11a.
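As a purely illustrative sketch (our own Python, not part of the MoDiTa software), the following shows one way such an observation pattern could be laid out, here the full 4 quadrants × 9 points grid; sensor size, margin and grid layout are assumptions.

```python
import numpy as np

def observation_pattern(width, height, points_per_quadrant=9, margin=0.1):
    """Distribute target positions over the four image quadrants.

    Returns an (N, 2) array of pixel positions at which the fixed target
    should be imaged by re-pointing the telescope (illustrative only).
    """
    n = int(np.ceil(np.sqrt(points_per_quadrant)))    # n x n grid per quadrant
    xs = np.linspace(margin, 0.5 - margin, n)          # normalised positions inside one quadrant
    ys = np.linspace(margin, 0.5 - margin, n)
    quad = np.array([(x, y) for x in xs for y in ys][:points_per_quadrant])
    offsets = np.array([(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)])  # the four quadrants
    pattern = np.vstack([quad + o for o in offsets])
    return pattern * np.array([width, height])

# Example: 36 positions (4 quadrants x 9 points) for a 1600 x 1200 pixel sensor
print(observation_pattern(1600, 1200).shape)  # (36, 2)
```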
After the measurement, we have 12 images, each with the target at different positions in the image. In this case, we use a black and white laser-scanning target with a checkerboard pattern. For the automatic, rough approach of these target directions, the knowledge of a rough start transformation is sufficient, which only includes the camera constant and the rotation of the camera coordinate system around the target axis of the total station. We merely tilt the telescope to the side by a small, fixed amount to determine the start transformation. During the measurement, the software continuously observes the crosshair, the so-called matching. Due to the simple mounting of the camera, the crosshair is not in the centre of the image. It may also be rotated around the optical axis.

In the context of self-calibration, we calculate the parameters via parameter estimation based on the least squares method. The functional model is based on the mapping relations between sensor space and object space (Walser 2004). Optical distortion and a possible tilt of the camera are compensated by the distortion approach according to Luhmann et al. (2020). With c as the camera constant for the entire optics, the unknown angles H_P and V_P for the target, κ, c and the photogrammetric radial, tangential and asymmetric distortions (A1, A2, A3, B1, B2, C1, C2) are obtained. We describe the mapping of the camera chip by a 2D transformation using a photogrammetric distortion model. Δx′ and Δy′ represent the parameters of the distortion; x_0 and y_0 represent the principal point, i.e. the detected crosshair. The angle readings to the fixed target are not measured directly but result from the pixel coordinates of the reference crosshair. The index P represents the measured values to the target point.

The unit vector to the searched target point is formed from the total station readings and the pixel position of the image point of one measurement:

\begin{pmatrix} x' \\ y' \\ c \end{pmatrix} = \begin{pmatrix} x_P - x_0 - \Delta x' \\ y_P - y_0 - \Delta y' \\ c \end{pmatrix}    (1)

The corresponding point in object space is

\begin{pmatrix} X_P \\ Y_P \\ Z_P \end{pmatrix} = D \, R_{I_P} R_{I_A} R_{I_P}^{T} R_{H_P} R_{V_P} R_{\kappa} \, \frac{1}{\lVert \ldots \rVert} \begin{pmatrix} x' \\ y' \\ c \end{pmatrix},    (2)

with

\lVert \ldots \rVert = \sqrt{x'^2 + y'^2 + c^2}.    (3)

The total vector in image space is normalised to unity, which is indicated by the division by ‖…‖. The spatial distance D is actually not required for the calibration; D prolongs the unit vector to the object point.

The matrix R_κ describes the rotation of the camera sensor around the optical axis:

R_{\kappa} = \begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}    (4)

R_{H_P} and R_{V_P} follow from the graduated circle readings. The required values are supplied by the total station. The matrices describe the necessary rotations to transform the direction vector from the system of the total station into a superordinate coordinate system. Thus, if the instrument has been stationed beforehand, the coordinates are converted directly into the system used.

R_{H_P} = \begin{pmatrix} \cos H_P & -\sin H_P & 0 \\ \sin H_P & \cos H_P & 0 \\ 0 & 0 & 1 \end{pmatrix}    (5)

R_{V_P} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \sin V_P & \cos V_P \\ 0 & \cos V_P & -\sin V_P \end{pmatrix}    (6)

R_{I_P} and R_{I_A} describe the rotation of non-compensator-corrected total station readings into compensator-corrected ones (I_P and I_A are calculated from the inclination in the direction of the target axis and transversely to the direction of the target axis). X_P, Y_P and Z_P are calculated and denote the normalised direction vector to the target point. By using (7) and (8), the angles H_P and V_P can be calculated:

H_P = \operatorname{atan} \frac{X_P}{Y_P},    (7)

V_P = \operatorname{atan} \frac{X_P / \sin H_P}{Z_P} \;\; \text{for} \;\; \sin H_P > \cos H_P, \qquad \text{else} \;\; V_P = \operatorname{atan} \frac{Y_P / \cos H_P}{Z_P}.    (8)

The different telescope positions result in corresponding image points x_P, y_P. From Eq. (2) follow H_P and V_P to the target point. We use this concept for self-calibration.
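To make the mapping of Eqs. (1)–(8) concrete, here is a minimal numerical sketch (our own Python illustration, not the authors' implementation). It assumes that the distortion terms Δx′, Δy′ and the compensator rotations R_IP, R_IA can be neglected and that the distance D is irrelevant for directions; function and variable names are ours.

```python
import numpy as np

def rot_kappa(k):
    return np.array([[np.cos(k), -np.sin(k), 0.0],
                     [np.sin(k),  np.cos(k), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_h(h):
    return np.array([[np.cos(h), -np.sin(h), 0.0],
                     [np.sin(h),  np.cos(h), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_v(v):
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.sin(v),  np.cos(v)],
                     [0.0, np.cos(v), -np.sin(v)]])

def direction_to_target(x_p, y_p, x0, y0, c, kappa, h_circle, v_circle):
    """Direction angles (H_P, V_P) for one image observation.

    Distortion (dx', dy') and the compensator rotations are omitted here.
    """
    img = np.array([x_p - x0, y_p - y0, c])        # Eq. (1) without distortion terms
    img = img / np.linalg.norm(img)                # normalisation by ||...||, Eq. (3)
    X, Y, Z = rot_h(h_circle) @ rot_v(v_circle) @ rot_kappa(kappa) @ img   # Eq. (2), no D, no R_I
    H = np.arctan2(X, Y)                           # Eq. (7)
    if abs(np.sin(H)) > abs(np.cos(H)):            # case distinction of Eq. (8)
        V = np.arctan2(X / np.sin(H), Z)
    else:
        V = np.arctan2(Y / np.cos(H), Z)
    return H, V
```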
This means that the target point does not have to be aimed at directly, but is introduced as unknowns H̃ and Ṽ:

\tilde{H} - H_P = 0 + v    (9)

\tilde{V} - V_P = 0 + v    (10)

We calculate residuals through a summary modelling of all stochastic influences. For practical reasons, the stochastic portions of image coordinates, circle readings and compensator readings are not modelled separately from each other. Atmospheric flicker can be reduced by suitably grouped multiple exposures. Here, the corrected tachymeter readings to the target point correspond to the direct measurement to the target. We did not differentiate between total station and camera-related corrections.

x_Ch and y_Ch represent the pixel position of the crosshair. Distortion and κ have no effect on the principal point:

R_{H_P} R_{V_P} R_{\kappa} \, \frac{1}{\lVert \ldots \rVert} \begin{pmatrix} x_{Ch} - x_0 - 0 \\ y_{Ch} - y_0 - 0 \\ c \end{pmatrix} = R_{\tilde{H}} R_{\tilde{V}} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}    (11)

We consider the measurements to be independent and equally accurate. The weight matrix P = I is defined with ones on the main diagonal, or zeros if a measurement is to be excluded from the equations as erroneous.

The Cholesky factorisation according to Förstner and Wrobel (2016) is used to solve the system of normal equations. The normal equation matrix is split into an upper and a lower triangular matrix C and C^T. We solve the system of normal equations by subsequent forward and backward substitution. This saves computing time because, instead of the entire normal equation matrix N, only one triangular matrix needs to be inverted (Förstner and Wrobel 2016; Luhmann et al. 2020). From the linearised observation equations follows the estimation of the unknowns as

\hat{x} = (A^{T} P A)^{-1} A^{T} P l.    (12)

The unknowns and the termination criterion are calculated per iteration. Termination occurs after the limit has been reached:

\hat{x}^{T} A^{T} P l = l^{T} P l - v^{T} P v < 0.00000001    (13)

As a result, the compensated direction angles to the target are provided. A transformation into Cartesian coordinates can be done afterwards by a distance measurement, if required. For this purpose, the determined target point is aimed at directly by the total station and a reflectorless measurement is carried out. The measured distance is used to extend the unit vector to the target point.
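The estimation step of Eqs. (12) and (13) can be sketched as follows (our own illustration using NumPy/SciPy, not the MoDiTa software); the design matrix A, the observation vector l and the relinearisation per iteration are assumed to be provided by the surrounding calibration code.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def adjust(A, l, P=None):
    """One Gauss-Markov estimation step: x_hat = (A^T P A)^(-1) A^T P l (Eq. 12).

    Returns the estimated parameters, the residuals and the value
    l^T P l - v^T P v used as the termination criterion in Eq. (13).
    """
    if P is None:
        P = np.eye(len(l))               # equal weights; excluded observations get weight zero
    N = A.T @ P @ A                      # normal equation matrix
    n = A.T @ P @ l
    c, low = cho_factor(N)               # Cholesky factorisation instead of a full inverse
    x_hat = cho_solve((c, low), n)       # forward/backward substitution
    v = A @ x_hat - l                    # residuals
    crit = l.T @ P @ l - v.T @ P @ v     # compared against the limit in Eq. (13)
    return x_hat, v, crit

# In the iterative adjustment, A and l would be relinearised around the
# updated parameters and the loop stopped once crit falls below the limit.
```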
4 Crosshair Tracking

Based on the modular adapter for mounting a camera, it is possible to capture the crosshair. The crosshair is a geodetic crosshair that is not located in the exact centre of the image. We distinguish between detecting and matching. Detection of the reference crosshair should take place as soon as possible after the camera is mounted. As with manual eyepiece adjustment, a monotone image background is preferred for this step, e.g. the sky or a grossly out-of-focus image, so as to get an even background. The position and orientation of the crosshair in the pixel coordinate system of the camera is determined with the help of further crosses, which we refer to as (virtual) réseau crosses in the following. The software continuously observes the position of the crosshair during further measurement. Any image coordinate is transformed to the reference crosshair using a 2D transformation including two translations and one rotation. Smaller deviations are recognised and taken into account by matching. If the change is too large, a new detection is necessary. Figure 9 provides a simplified overview of the single steps. In the following, we distinguish between the inner and the outer crosshair. The two inner lines mark the centre of the crosshair. The outer crosshair is composed of the six outer lines of the geodetic crosshair (Fig. 2). The elaborate modelling of the reference crosshair makes it possible to ensure largely continuous tracking later on, even if line elements are only recognisable in parts.

Fig. 2 Geodetic crosshair with pixel coordinate system. Also shown is the distinction between the inner (red) and outer (green) crosshair.

4.1 Crosshair Detection

During crosshair detection, first the outer crosshair with its six lines is roughly determined. According to Steger (1996a, b, 1998), a Gaussian smoothing filter in combination with its partial directional derivatives is applied. According to Canny (1983), the smoothing and the threshold values are determined. If the value of the second partial derivative of an image point exceeds the upper threshold in a pixel, it is detected as a line point with sub-pixel accuracy. If the second partial derivative is smaller than the lower threshold, we dismiss the pixel. If the value is between both thresholds, the pixel is only used if it can be connected to detected line points. As a result, we obtain several line segments. These are then examined for false detections according to Haralick and Shapiro (1992) or Suesse and Voss (1993). By calculating a regression line through the image points of the lines, the mean distance of the individual points to the line can be calculated. We reject points with a distance greater than the mean. The regression line is then calculated again. By defining a limit value for the direction difference and the distance of the end points of neighbouring lines, these are merged if necessary (Fig. 3). The regression line is then calculated again. This is repeated until the maximum values for the direction difference and the distance of the line end points are no longer exceeded. The six longest lines of the calculations correspond to the outer geodetic crosshair. These are still in the form of polylines. They are calculated individually by adjustment according to Haralick and Shapiro (1992) and Suesse and Voss (1993) as straight line equations. In addition, the line width is determined for later use in the precise determination of the crosshair. Simplified, we calculate the width of the longest line by edge detection. Starting from a point on the line, we search the perpendicular distance on both sides up to the edge. The length of the perpendicular is determined for each pixel on the vector line. The mean value then gives the line width. The contour width can differ depending on the camera and the total station's crosshairs, but it must be at least one pixel wide in order to be detectable. The results are the start and end points, the straight line equations of the six crosshair lines and the line width.

Fig. 3 Detection of the outer crosshair according to Canny (1983), Steger (1996a, b, 1998), and Suesse and Voss (1993). Solving the ambiguities using the example of an end of a line. a Line detection. b Merging lines based on limits. c Discarding falsely detected lines.

We use the six compensated lines of the outer crosshair to determine the rough crosshair centre. All intersections of the straight lines are formed. We eliminate negative coordinates. By forming the median of the nine intersections, the rough centre is calculated. The maximum distance from the rough centre to the intersection points and the approach of an isosceles triangle are used to calculate the distance between two parallel crosshair lines.
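A minimal sketch of the rough-centre computation described above (our own Python illustration, assuming the six outer lines are already available in the homogeneous form a·x + b·y + d = 0):

```python
import numpy as np

def intersection(l1, l2):
    """Intersection of two lines given as (a, b, d) with a*x + b*y + d = 0."""
    p = np.cross(l1, l2)                       # homogeneous intersection point
    return p[:2] / p[2] if abs(p[2]) > 1e-12 else None   # None for (nearly) parallel lines

def rough_centre(lines):
    """Median of the pairwise intersections of the outer crosshair lines.

    `lines` is a list of (a, b, d) tuples; intersections with negative
    coordinates (outside the image) are discarded, as described above.
    """
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersection(np.asarray(lines[i], float), np.asarray(lines[j], float))
            if p is not None and p[0] >= 0 and p[1] >= 0:
                pts.append(p)
    return np.median(np.array(pts), axis=0)
```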
For the definition of the inner crosshair, we define a circle with its centre equal to the roughly determined centre and its radius equal to the distance between two parallel crosshair lines. According to Luhmann et al. (2020), one-dimensional grey value profiles are formed perpendicular to the circumference. These are obtained by averaging all existing grey values of a line that lies perpendicular to the circular ring. Using a Gaussian smoothing filter and a Laplace operator, we are able to calculate the edge positions with sub-pixel accuracy. We compare the edge amplitudes with a previously defined threshold value. If the amplitude is greater than the threshold value, an edge is present at the corresponding image position. A total of eight points are detected on the inner crosshair, two points per line (Fig. 4a). If more than four lines intersect with the circular ring, only the best four edges are used. The selection is made via the calculated edge amplitude. The greater the amplitude, the higher the contrast of the contour at the image position. The start and end points of the edges per line are averaged so that there is one point for each inner line of the crosshair (Fig. 4b). For the orientation of the cross, the direction angle from the rough centre to the point with the largest edge amplitude is used.

Fig. 4 Rough detection of the inner crosshair. a Definition of an intersection circle around the rough centre. Edge detection following the circle. b Mean of the edge points.

For the precise determination of the crosshair, we consider the lines individually and points are determined at equal distances on each line. With the help of the direction angles, we form sectors of circles around the rough crosshair centre. Within two defined circles with different radii, the edges per pixel of the lines are detected again according to Luhmann et al. (2020) (Fig. 5a). As a result, the edge beginnings and ends of the inner crosshair are available for each line as coordinates between the two circles. These are averaged again so that the centre points of the line contours are available. Points with the same direction on the inner crosshair are combined. This results in two lines: one horizontal and one vertical. These are adjusted according to the same principle of Haralick and Shapiro (1992) and Suesse and Voss (1993) (Fig. 5c). The exact image coordinates of the crosshair centre are now available with sub-pixel accuracy via the point of intersection.

Fig. 5 Precise detection of the inner crosshair. a Definition of circles around the rough centre. Creating sectors. b Mean of the edge points. c Compensated lines with end points.
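The sub-pixel edge localisation on a grey value profile can be sketched as follows (our own illustration; sigma and the amplitude threshold are assumptions, and the 1D second derivative stands in for the Laplace operator mentioned above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def subpixel_edges(profile, sigma=1.5, min_amplitude=10.0):
    """Sub-pixel edge positions along a 1D grey-value profile.

    Smooths with a Gaussian, takes the first and second derivatives and
    interpolates the zero crossings of the second derivative; edges whose
    gradient amplitude stays below `min_amplitude` are rejected.
    """
    g  = gaussian_filter1d(profile.astype(float), sigma, order=1)   # first derivative
    g2 = gaussian_filter1d(profile.astype(float), sigma, order=2)   # second derivative
    edges = []
    for i in range(len(g2) - 1):
        if g2[i] == 0.0 or g2[i] * g2[i + 1] >= 0.0:
            continue                                   # no sign change, hence no edge here
        t = g2[i] / (g2[i] - g2[i + 1])                # linear interpolation of the zero crossing
        amplitude = abs((1 - t) * g[i] + t * g[i + 1])
        if amplitude >= min_amplitude:
            edges.append((i + t, amplitude))
    return edges   # list of (sub-pixel position, edge amplitude)
```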
We calculate réseau crosses for the alignment of the precise crosshair. This is done by calculating two points per line. By defining two circles with different radii (small circle: 3 times the distance of the parallel crosshair lines; large circle: shortest distance from the centre to the edge of the image, reduced by 10%), the resulting intersections with the outer cross lines are all within the image area. The smaller circle is intersected with the precise inner crosshair lines (Fig. 6a). Similar to the procedure already described for the rough determination of the inner crosshair, the edge contours are determined perpendicular to the line from the crosshair centre according to Luhmann et al. (2020) (Fig. 6b). The detection of the contours is again carried out via grey value profiles and the start and end points are then averaged. We repeat the steps for the outer radius so that two points are available for each outer crosshair line. Finally, we sort the detected points separately for the inner and outer circle into the correct quadrant. The 12 points on the outer cross lines are determined precisely and are available in the correct position as a pair of points per line.

Fig. 6 Point detection of the outer crosshair using the example of a double line. a Intersection of the inner circle with an inner line. b Line points are formed via edge detection perpendicular to the intersection point.

One hundred further points are then determined between two points of a line according to the procedure shown in Fig. 5. We chose this number so that a large number of points is available for the definition of the straight line and the following adjustment; a different definition is also possible. The one hundred points to be detected are distributed at equal intervals along the length between the start and end point of a line. As in the previous step, the detection of the line contour perpendicular to the line is performed for each point. The points on the opposite crosshair lines are combined so that the final result is a horizontal and a vertical line (Fig. 7). The adjustment calculation is carried out according to the least squares method. We detect and eliminate outliers before the adjustment.

The calculation of the straight lines is based on the procedure according to Kampmann and Renner (2004), method 3. We transferred the model of the adjustment calculation described there to the model of the crosshair with two straight lines and adapted it to its special features. The double lines of the crosshair result in an additional unknown, so that the functional model in coordinate form is as follows:

a_{1,2} \, x_i + b_{1,2} \, y_i + d_{1,2} \pm \delta = 0.    (14)

The variable δ corresponds to half the parallel distance of a double line to the adjusted straight line. It is chosen with different signs for the two contours of a double line and should be defined the same for both straight lines. The parameter δ is omitted for a single line. The equation is set up independently for the two lines, so that the parameters must be determined separately for each equation. The eight parameters of the two lines to be determined are listed in the unknown vector X.
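As a rough sketch of how such a double-line model could be set up (our own Python illustration; it does not reproduce the exact parameterisation of Kampmann and Renner (2004), method 3, nor the paper's eight-parameter vector X): each detected contour point contributes one observation equation of the form of Eq. (14), and the normalisation of the line coefficients is added as a heavily weighted pseudo-observation.

```python
import numpy as np

def fit_double_lines(pts1, sides1, pts2, sides2, n_iter=10):
    """Least-squares fit of two straight lines with a common half spacing delta
    (cf. Eq. 14): a_k*x + b_k*y + d_k + s*delta = 0 with s = +/-1.

    pts*:   (N, 2) point coordinates detected on the double-line contours,
    sides*: +1/-1 indicating which of the two parallel contours a point lies on.
    """
    # parameter vector: [a1, b1, d1, a2, b2, d2, delta]
    x = np.array([0.0, 1.0, -np.mean(pts1[:, 1]), 1.0, 0.0, -np.mean(pts2[:, 0]), 1.0])
    for _ in range(n_iter):
        rows, w, misclosure = [], [], []
        for pts, sides, k in ((pts1, sides1, 0), (pts2, sides2, 3)):
            a, b, d = x[k], x[k + 1], x[k + 2]
            for (px, py), s in zip(pts, sides):
                row = np.zeros(7)
                row[k], row[k + 1], row[k + 2], row[6] = px, py, 1.0, s
                rows.append(row); w.append(1.0)
                misclosure.append(-(a * px + b * py + d + s * x[6]))
            cond = np.zeros(7); cond[k], cond[k + 1] = 2 * a, 2 * b
            rows.append(cond); w.append(10.0)          # weighted condition a^2 + b^2 = 1
            misclosure.append(1.0 - (a * a + b * b))
        A, P, l = np.array(rows), np.diag(w), np.array(misclosure)
        dx = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
        x = x + dx
        if np.max(np.abs(dx)) < 1e-8:
            break
    return x
```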
The adjustment of both straight lines is done in one calculation. The approximate values for the variables d_{1,2} and δ are determined empirically. The remaining coefficients are initialised with zero, so that the approximate straight lines are parallel to the axes of the image coordinate system. Since the underlying equation of the functional model is linear, the partial derivatives with respect to the parameters are equal to the observations used. Simplified, the condition equation can be regarded as an observation equation for the software solution, but with a significantly higher weight than the observations. The weight for all observations equals 1, and the condition equation is given the weight 10 (Kampmann and Renner 2004). As previously described in the section on calibration, the solution of the system of normal equations is carried out by means of Cholesky factorisation according to Luhmann et al. (2020) or Förstner and Wrobel (2016). The adjustment is iterated until the termination criterion is reached. From the second iteration onwards, the parameter estimates are updated in the sense that the unknowns X are defined as parameter estimates X̂ for the following iterations. For the determination of the two straight lines of the outer crosshair in the sub-pixel range with one decimal place, a few iterations are already sufficient. We choose the termination criterion in such a way that in normal cases only a few iterations are necessary. The three opposite lines with approximately the same orientation are balanced to form one straight line (Fig. 7). The outer line cross has thus been determined precisely, so that the réseau crosses can then be determined as described. These are located on the straight lines determined at this point and, together with the crosshair centre, define the precise position and orientation of the crosshair in the image coordinate system.

Fig. 7 Principle of the adjustment (simplified example). a Before the adjustment. b After the adjustment.

4.2 Crosshair Matching

Due to the possibility that the crosshair position changes after a longer period of time and when the telescope position changes, it must be determined continuously (Atorf et al. 2019). In the case of smaller movements of the crosshair, these can be matched by calculating a normalised correlation coefficient (NCC). The current centre of the crosshair in the image is compared with the last detected crosshair. The current position of the crosshair centre must be within a generated model in order to be matched. If the difference in position is too large, the crosshairs must be detected again. Matching is again carried out using the NCC procedure according to Luhmann et al. (2020). This procedure corresponds to a simplified detection of the precise crosshair centre, since no homogeneous background is required throughout. The current crosshair centre should be able to be tracked continuously in the image during a measurement.

For the model image, we define a circle with 0.75 times the distance between the parallel lines around the precise centre of the crosshair. Within this image area, the similarity comparison is carried out by means of normalised cross-correlation. The model image, in this case the pixels of the circle area, is generated on several image pyramids in different planes and rotations. We generate the image pyramids until the top level still provides enough information about the image. The processing effort is higher than with other correlation methods due to the large number of images generated. However, for the selected image area and given today's technology standards, this is not a disadvantage (Luhmann et al. 2020). Finally, the origin of the model image is set to the precise crosshair centre.

For the crosshair matching, the NCC model of the last detected crosshair centre together with the current crosshair is required. We define the search area for the inner crosshair by a circle around the last detected crosshair. We define the radius as 1.5 times the distance between the parallel lines. This saves unnecessary computing time, since the centre of the crosshair cannot be located at the edge of the image. Then, according to Luhmann et al. (2020), all defined instances of the crosshair are detected within the image section of the current crosshair. We only use the best instance for the crosshair, since it is unique in the image. The best instance is characterised by the highest value of the correlation coefficient, which can take values between zero and one. The calculation of the precise crosshairs together with the resulting réseau crosses is then carried out according to the procedure already described. The software saves the coordinates again in a local file.
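A minimal matching sketch (our own illustration using OpenCV's normalised cross-correlation; it ignores the image pyramids and model rotations described above, and the handling of the search radius is simplified):

```python
import cv2

def match_crosshair(frame, model, last_centre, search_radius):
    """Locate the crosshair model in the current frame by normalised
    cross-correlation, restricted to a window around the last detected centre.

    `model` is the image patch around the previously detected crosshair centre;
    returns the new centre estimate and the correlation score (0..1 range).
    """
    x0, y0 = int(last_centre[0]), int(last_centre[1])
    r = int(search_radius) + max(model.shape) // 2
    y_min, y_max = max(0, y0 - r), min(frame.shape[0], y0 + r)
    x_min, x_max = max(0, x0 - r), min(frame.shape[1], x0 + r)
    window = frame[y_min:y_max, x_min:x_max]

    ncc = cv2.matchTemplate(window, model, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(ncc)          # best instance = highest coefficient
    centre = (x_min + top_left[0] + model.shape[1] / 2.0,
              y_min + top_left[1] + model.shape[0] / 2.0)
    return centre, score

# If the score drops too low or the centre leaves the search window,
# a full re-detection of the crosshair would be triggered instead.
```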
However, the inner crosshair can also be determined mathematically by detecting the outer crosshair (Fig. 8). Because the crosshair lines and the geometric reference to the target axis are determined with sub-pixel accuracy, the intersection of the crosshair lines does not have to be determined repeatedly in every image but may, for example, also be partially covered. In the user interface of the control software, the coordinate differences between the last detected and the matched crosshair centre are displayed to the user (see Fig. 9).

Fig. 8 a, b Goal of crosshair matching (simplified example). Matched points on at least two lines in two directions (a), or on one line in combination with a successful matching of the inner crosshair, ensure tracking of camera motions. c, d Examples of an unmatchable inner crosshair due to a dark background (c) and overexposure (d). Nevertheless, the inner crosshair can be calculated by detecting the outer crosshair.

Fig. 9 A simplified overview of the single steps.

5 Practical Applications

In the following, we cover exemplary studies on different applications of MoDiTa in the structural health monitoring (SHM) of existing structures.

5.1 Studies on Distance Independence

A practical application for IATS is the SHM of structures such as factory chimneys, dams or bridges (Paar et al. 2021; Zschiesche 2022). What all these structures have in common is that they usually have elongated dimensions. Often it is impossible to stand directly perpendicular to the structure or to measure the entire structure from the same distance due to environmental conditions such as rivers or railway lines (Fig. 10). Changes in the distance to the measured object lead to refocusing of the optics and thus also to changes in the distortions. To discuss this aspect in more detail, we have carried out measurements from different distances to the instrument.

Fig. 10 Exemplary view of two IATS (MoDiTa) on an elongated structure with different forced distances. Shown are the observation of a reference point (green dashed line) and the simultaneous observation of other monitoring points on the bridge (yellow).
For the measurement, we used a TS30 (Leica Geosystems AG 2009) and an industrial camera UI-3250ML-M (IDS Imaging Development Systems GmbH 2015) with 1.92 MPixel. The measurement took place on 3 February 2022 in the courtyard of Mainz University of Applied Sciences between 10 am and 1 pm (CET). We limited the distance to between 3.5 and 100 m. Over this range, calibration measurements were carried out with MoDiTa with a sample of 12 measurements per distance. These measurements were taken over the entire image area. For comparison, we also applied the calibration of the measurement at 20 m distance to the measurements at shorter and longer distances. We compare the results in the form of residuals or deviations in Fig. 11. The different positions that the IATS moves to are visible.

Fig. 11 a A measurement image with displayed positions for self-calibration. The defined target is located at the positions marked in red. In this way, different positions (in this case 12) are approached across the image for the adjustment. b The resulting residuals of a measurement at 20 m distance to the instrument. c The resulting deviations of a measurement at 70 m distance calculated with the self-calibration at 20 m distance to the instrument. d The resulting residuals of a measurement at 70 m distance to the instrument. e The resulting deviations of a measurement at 4.5 m distance calculated with the self-calibration at 20 m distance to the instrument. Clearly visible are the larger deviations compared to (b, c). f The resulting residuals of a measurement at 4.5 m distance.

Figure 11b shows residuals of the 20 m measurement calculated with the 20 m calibration in the range of −0.3 to 0.2 mgon for the horizontal and zenith angle, which is within the expected angular accuracy of the measurement system. We assume that the adjustment models the system successfully. In comparison, even at a longer distance to the object, deviations appear only slightly. However, the use of the same calibration shows that at a greater distance significantly larger deviations occur in the marginal area of the measurement image (Fig. 11c). Close to the crosshair, in this case also close to the centre of the image, the deviations have values of at most 0.3 mgon; in the outer range they reach −0.6 to 0.9 mgon for the horizontal angle and −0.1 to 0.5 mgon for the zenith distance. Therefore, the middle part is still modelled within the accuracy of measurement, but systematics can already be identified in the outer area. From a distance of approximately 7 m, the deviations increase significantly. For clarification, Fig. 11e shows the measurement at 4.5 m distance. The range of the residuals is from −0.7 to 1.1 mgon for the horizontal and zenith angle. Here again, the values are smaller towards the centre, but on average well above 0.3 mgon. The systematic deviations can be traced back to an unsuitable model of the adjustment. Walser (2004) and Wasmeier (2009a) achieved similar results with an integrated coaxial camera, so the deviations correspond to our expectations. We suspect the influence of a scale factor, possibly the optical system's focal length.
Figure 11d, f shows the residuals of the self-calibrations with the corresponding measurements. In both cases, the residuals remain within the measurement accuracy of the total station and do not exceed 0.3 mgon.

Figure 12 shows the estimated distortion parameters over the entire distance range. The estimated nine parameters are presented with their respective accuracies as described in Sect. 3. The calculation is made for the entire optics, i.e. for the telescope of the total station and the adapter used (Fig. 1). In the parameter estimation, we calculated a separate calibration for each distance and autofocus setting. In addition to the individual distances, we give the respective stepper motor positions, since these reflect the required mechanical movements of the focusing optics, which are responsible for the changes in the distortion parameters (Wasmeier 2009a).

Fig. 12 Plot of the estimated parameters and a posteriori/empirical standard deviation.

For the template matching with the cross-correlation method, a circular section with a radius of less than 40 pixels was used in each case. The maximum mean error of unit weight from the adjustment is 0.32 mgon. The smallest value is reached for the measurement at a distance of 20 m, where it is 0.13 mgon. All measured values of the 12 measurements were used, and no measured values were eliminated during the adjustment.

Table 1 shows the maximum and minimum estimated parameter values and empirical standard deviations corresponding to Fig. 12a–h.

Table 1 Results of the adjustment

Parameter     Estimated results (min)   Estimated results (max)   Empirical std. dev. (min)   Empirical std. dev. (max)
κ [rad]       2.05827 × 10⁻⁵            1.14836 × 10⁻³            1.75569 × 10⁻⁴              4.33541 × 10⁻⁴
c [mm]        456.5227                  461.1159                  0.3116                      1.1808
A1 [1/mm²]    −5.99941 × 10⁻³           6.70138 × 10⁻⁴            1.55959 × 10⁻³              5.80336 × 10⁻³
A2 [1/mm⁴]    −5.65061 × 10⁻⁵           1.82210 × 10⁻³            4.60688 × 10⁻⁴              1.78219 × 10⁻³
A3 [1/mm⁶]    −1.36531 × 10⁻⁴           −5.42970 × 10⁻⁷           3.48835 × 10⁻⁵              1.36997 × 10⁻⁴
B1 [1/mm²]    −2.70265 × 10⁻⁴           2.59471 × 10⁻²            8.47366 × 10⁻³              2.01747 × 10⁻²
B2 [1/mm²]    −9.56102 × 10⁻⁴           3.31246 × 10⁻²            2.89327 × 10⁻²              1.02284 × 10⁻²
C1 [1/mm]     −8.08512 × 10⁻¹           −9.74693 × 10⁻²           5.33810 × 10⁻²              1.32803 × 10⁻¹
C2 [1/mm]     3.14872 × 10⁻²            2.84365 × 10⁻¹            5.72376 × 10⁻²              1.39879 × 10⁻¹

All symmetric radial distortions show their minima and maxima at close range (Fig. 12a–c). The calculated empirical standard deviations of the tangential distortions B1 and B2 (Fig. 12d, e) show similar orders of magnitude and are more consistent than those of the symmetric radial distortions. Both B1 and B2 show their greatest value at long distance, but fluctuate more in the near range.

The results show comparatively strong changes of the parameters in the near range. This is in line with our expectations: with larger movements of the focusing optics, we anticipate correspondingly larger changes in the calibration parameters.

Overall, our studies show that micromovements of 0.5 mm/100 m can be resolved. The achievable accuracy depends on the atmospheric conditions and the illumination situation and decreases with increasing distance. The investigations indicate that for typical distances to structures for SHM of 10 m or greater, it is sufficient to use a single self-calibration for a complete project.
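As a quick plausibility check of our own (not a calculation from the paper), the quoted resolution of 0.5 mm over 100 m corresponds to an angle of roughly 0.3 mgon, which is consistent with the angular accuracies reported above:

```latex
\alpha = \arctan\!\left(\frac{0.5\,\mathrm{mm}}{100\,\mathrm{m}}\right)
       \approx 5 \times 10^{-6}\,\mathrm{rad}
       = 5 \times 10^{-6} \cdot \tfrac{200}{\pi}\,\mathrm{gon}
       \approx 0.32\,\mathrm{mgon}
```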
5.2 Deformation Measurement on a Steel Bridge

The identification of dynamic structural characteristics is an important aspect of SHM. It enables the analysis of monitored vibrations and the calculation of natural frequencies. Changes in natural frequency indicate possible structural damage. Conventionally, acceleration sensors are attached to the structure with a high labour input. MoDiTa enables high-frequency recordings of the movement behaviour without having to enter the structure. For the measurement of frequencies, the maximum frames per second (fps) are essential. Only at an adequately high sampling rate can the natural frequency be determined from the measured values and aliasing be ruled out. Bruschetini-Ambro et al. (2017) and Lachinger et al. (2022) do not recommend determining the damping from the excitation of a train crossing, because the selection of the ambient window has too great an influence on the result and the results therefore scatter too much.

To capture the deformation behaviour of a bridge during a crossing, we observed a steel bridge using MoDiTa (Fig. 13a, b). We used a Leica TS60 (Leica Geosystems AG 2020) in combination with an industrial camera UI-3080CP-M with 5.04 MPixel (IDS Imaging Development Systems GmbH 2016). The distance to the bridge was approximately 10 m. The recording frequency was 500 Hz with an exposure time of 0.002 s. To achieve this high frequency, we only observed an area of interest. The observed point is located in the upper centre of the western bridge. We attached no targets to the structure. Figure 13c shows the recorded deformation over time. For further analysis, we used the ambient window (shown in Fig. 13c by a green box). The ambient window captures the oscillation behaviour of the bridge without the additional mass of the train.

Fig. 13 a A steel bridge (30 m span) for freight and passenger traffic (railway crossing structure, Lat 49.893556, Lon 8.637458, recorded on June 17, 11:16 CET). b The measuring system MoDiTa: Leica TS60 total station with camera UI-3080CP-M (5.04 MPixel), exposure time 0.002 s, recording frequency 500 Hz, distance 10 m. c The recorded measured values (movement of the target) in the vertical direction; 25 s of the recording with 12,500 observations are shown, with the ambient window marked. d The result of the fast Fourier transform (FFT), ambient window only, with a main peak at 3.9 Hz (approximate value). e A least-squares fit of a damped oscillation to the ambient window with a natural frequency of 3.8 Hz (balanced value).

Figure 13d shows the results of the fast Fourier transform (FFT) of the ambient window used to determine the natural frequencies. Only the results at low frequencies are shown. A clear peak is visible at 3.9 Hz.

To calculate the damped oscillation using a least squares fit, we evaluated the measured values from second 19.0 to 20.0 (Fig. 13e). The result is a calculated natural frequency of 3.79 Hz. The mean error of unit weight is estimated at 0.046 mm/10 m. This corresponds to the expected resolution and accuracy, cf. Sect. 5.1. Both evaluations come to almost the same result and confirm each other.
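The frequency evaluation described above can be sketched as follows (our own Python illustration, not the authors' software; the displacement record `z`, the sampling rate and the initial damping value are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def dominant_frequency(displacement, fs):
    """Dominant frequency of a displacement record sampled at fs Hz (FFT peak)."""
    d = displacement - np.mean(displacement)
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin

def damped_sine(t, amp, decay, f, phase, offset):
    """Free-decay model: amp * exp(-decay * t) * sin(2*pi*f*t + phase) + offset."""
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * f * t + phase) + offset

def fit_free_decay(t, displacement, f0):
    """Least-squares fit of a damped oscillation to the ambient window."""
    p0 = [np.std(displacement), 0.1, f0, 0.0, np.mean(displacement)]
    popt, _ = curve_fit(damped_sine, t, displacement, p0=p0)
    return popt    # popt[2] is the fitted natural frequency

# Usage sketch for a 500 Hz record `z` (vertical displacements) of the ambient window:
# f0 = dominant_frequency(z, fs=500.0)
# t = np.arange(len(z)) / 500.0
# params = fit_free_decay(t, z, f0)
```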
6 Conclusion and Outlook

In this article we explain how the general setup of the measuring system, in combination with the developed software, detects the crosshair and thus performs the self-calibration. The self-calibration achieves accuracies within the measuring accuracy of the total station. We made assumptions such as that the lines of the crosshairs are parallel and that six precise lines represent the outer crosshair. We have not investigated either assumption further.

The algorithms explained track the crosshair position and orientation correctly and with sub-pixel accuracy, even if the inner crosshair cannot be found by means of cross-correlation. Due to the determination of the crosshair lines and the sub-pixel accurate geometric reference to the target axis, the intersection point of the crosshair lines does not have to be determined again in every image but may also, for example, be covered. This offers the measuring system more flexibility.

In the practical application explained, we show the results of several calibration calculations. We have shown that it is not necessary to calculate a separate photogrammetric calibration for each distance to the object. For the combination of camera and total station used, areas can be defined which, for example, meet the requirements of SHM and can be covered with one calibration of the optical system. As the distance to the calibration distance increases, so do the angular deviations. Generally, the deviations towards the edge of the measurement image are larger. This shows that the distortion approach varies significantly, especially in the close-up range. We assume that the influence of the changed camera constant c is responsible for the systematic deviations. The use of one calibration can save a lot of time during the measurement, as it is no longer necessary to measure the calibration pattern again and calculate it.

Furthermore, we carried out an exemplary deformation measurement on a steel bridge and demonstrated the successful use for the determination of natural frequencies. Due to the high recording frequency, it is possible to record the vibration behaviour. Further research is needed in this area, such as the comparison with other sensor systems.

Acknowledgements The authors would like to thank Alexander Milch, Cedric Jager, Lukas Haas and Michael Biele for their work on the project.

Funding Open Access funding enabled and organized by Projekt DEAL. The project was funded by the Carl-Zeiss-Foundation, BAM—Big-Data-Analytics in Environmental and Structural Monitoring, P2017-02-003.

Availability of Data and Material The data presented in this study are available on request from the corresponding author.

Declarations

Conflicts of Interest/Competing Interests The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Atorf P, Heidelberg A, Schlüter M, Zschiesche K (2019) Berührungslose Positionsbestimmung von spiegelnden Kugeln mit Methoden des maschinellen Sehens. zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement 144:317–322. https://doi.org/10.12902/zfv-0268-2019
Bruschetini-Ambro S-Z, Fink J, Lachinger S, Reiterer M (2017) Ermittlung der dynamischen Kennwerte von Eisenbahnbrücken unter Anwendung von unterschiedlichen Schwingungsanregungsmethoden. Bauingenieur 92:2–13

Bürki B, Guillaume S, Sorber P, Oesch H (2010) DAEDALUS: a versatile usable digital clip-on measuring system for Total Stations. In: 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp 1–10. https://www.research-collection.ethz.ch/handle/20.500.11850/159968

Canny J (1983) Finding edges and lines in images. Master's thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science

Charalampous E, Psimoulis P, Guillaume S, Spiridonakos M, Klis R, Bürki B, Rothacher M, Chatzi E, Luchsinger R, Feltrin G (2014) Measuring sub-mm structural displacements using QDaedalus: a digital clip-on measuring system developed for total stations. Appl Geomat. https://doi.org/10.1007/s12518-014-0150-z

Förstner W, Wrobel BP (2016) Photogrammetric computer vision: statistics, geometry, orientation and reconstruction, vol 11. Springer International Publishing, Cham

Grimm D, Zogg H-M (2013) Leica Nova MS50 white paper. Leica Geosystems AG, Heerbrugg

Guillaume S, Bürki B, Griffet S, Mainaud Durand H (2012) QDaedalus: augmentation of total stations by CCD sensor for automated contactless high-precision metrology. In: FIG Working Week 2012

Guillaume S, Clerc J, Leyder C, Ray J, Kistler M (2016) Contribution of the image-assisted theodolite system QDaedalus to geodetic static and dynamic deformation monitoring. In: Conference and Seminar Proceedings: 3rd Joint International Symposium on Deformation Monitoring (JISDM). International Federation of Surveyors, FIG, Copenhagen, p 66. https://www.research-collection.ethz.ch/handle/20.500.11850/126892

Haralick RM, Shapiro LG (1992) Computer and robot vision, vol 2, chapter 14.9. Addison-Wesley Longman Publishing Co. Inc

Hauth S, Schlüter M, Thiery F (2013) Schneller und ausdauernder als das menschliche Auge: Modulare Okularkameras am Motortachymeter. avn – Allgemeine Vermessungsnachrichten 120. VDE VERLAG GmbH, Berlin–Offenbach, pp 210–216

Huang YD, Harley I (1989) Calibration of close-range photogrammetric stations using a free network bundle adjustment. In: Gruen A, Kahmen H (eds) Proceedings of the first conference on optical 3-D measurement techniques. Wichmann Verlag, Vienna, pp 49–56

IDS Imaging Development Systems GmbH (2015) UI-3250ML-M-GL: data sheet. https://en.ids-imaging.com/store/ui-3250ml.html. Accessed 4 Feb 2015

IDS Imaging Development Systems GmbH (2016) UI-3080CP-M-GL: data sheet. https://en.ids-imaging.com/IDS/datasheet_pdf.php?sku=AB00848. Accessed 26 June 2016

ISO 17123-3 (2001) Optics and optical instruments—field procedures for testing geodetic and surveying instruments—part 3: theodolites. International Organization for Standardization

Kampmann G, Renner B (2004) Vergleich verschiedener Methoden zur Bestimmung ausgleichender Ebenen und Geraden. avn – Allgemeine Vermessungsnachrichten, pp 56–67

Lachinger S, Vorwagner A, Reiterer M, Fink J, Ambro SZ (2022) Entwicklung eines neuen Regelwerkes für dynamische Messungen von Eisenbahnbrücken der ÖBB. In: VDI Wissensforum GmbH (ed) 7. VDI-Fachtagung Baudynamik, pp 53–65

Leica Geosystems AG (2009) TS30 technical data. https://leica-geosystems.com/sftp/files/archived-files/TS30_Technical_Data_en.pdf. Accessed 4 Feb 2022

Leica Geosystems AG (2020) Leica Nova TS60: data sheet. https://leica-geosystems.com/en-gb/products/total-stations/robotic-total-stations/leica-nova-ts60. Accessed 4 Mar 2021

Luhmann T, Robson S, Kyle S, Boehm J (2020) Close-range photogrammetry and 3D imaging, 3rd edn. De Gruyter, Berlin

Paar R, Marendić A, Wagner A, Wiedemann W, Wunderlich T, Roić M, Damjanović D (2017) Using IATS and digital levelling staffs for the determination of dynamic displacements and natural oscillation frequencies of civil engineering structures. In: Kopačik A, Kyrinovič P, Maria JH (eds) Proceedings of the 7th international conference on engineering surveying – INGEO 2017, Lisbon, Portugal, pp 49–58. https://www.bib.irb.hr/900625

Paar R, Roić M, Marendić A, Miletić S (2021) Technological development and application of photo and video theodolites. Appl Sci 11:3893. https://doi.org/10.3390/app11093893

Reiterer A, Wagner A (2012) System considerations of an image assisted total station – evaluation and assessment. avn – Allgemeine Vermessungs-Nachrichten, pp 83–94

Schlüter M, Hauth S, Heß H (2009) Selbstkalibrierung motorisierter Digitalkameratheodolite für technische Präzisionsmessungen. zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement. Wißner-Verlag, Augsburg, pp 22–28

Steger C (1996a) Extracting curvilinear structures: a differential geometric approach. In: Buxton B, Cipolla R (eds) Computer vision—ECCV '96. Springer, Berlin Heidelberg, pp 630–641

Steger C (1996b) Extraction of curved lines from images. In: Proceedings of the 13th International Conference on Pattern Recognition, vol 2, pp 251–255

Steger C (1998) An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell 20:113–125. https://doi.org/10.1109/34.659930

Suesse H, Voss K (1993) Adaptive Ausgleichsrechnung und Ausreißerproblematik für die digitale Bildverarbeitung. In: Pöppl SJ, Handels H (eds) Mustererkennung. Springer, Berlin Heidelberg, pp 600–607

Wagner A, Wasmeier P, Reith, Wunderlich T (2013) Bridge monitoring by means of video-tacheometer—a case study. avn – Allgemeine Vermessungs-Nachrichten, pp 283–292

Wagner A, Huber B, Wiedemann W, Paar G (2014) Long-range geo-monitoring using image assisted total stations. J Appl Geodesy. https://doi.org/10.1515/jag-2014-0014

Wagner A, Wiedemann W, Wasmeier P, Wunderlich T (2016) Monitoring concepts using image assisted total stations. In: Paar R, Marendić A, Zrinjski M (eds) SIG 2016. Croatian Geodetic Society

Walser BH (2004) Development and calibration of an image assisted total station. Dissertation, ETH Zurich

Wasmeier P (2009a) Grundlagen der Deformationsbestimmung mit Messdaten bildgebender Tachymeter. PhD thesis, Technische Universität München

Wasmeier P (2009b) Videotachymetrie – Sensorfusion mit Potenzial. avn – Allgemeine Vermessungsnachrichten 7. VDE VERLAG GmbH, Berlin–Offenbach, pp 261–267

Zschiesche K (2022) Image assisted total stations for structural health monitoring—a review. Geomatics 2:1–16. https://doi.org/10.3390/geomatics2010001

In detail, the frame rate of 20 Hz is only achieved at the VGA resolution of the display, 640 × 480 pixels (Grimm and Zogg 2013). Saving a full 2560 × 1920 pixel image to an SD card usually takes more than 2 s with JPEG compression and even more than 6 s in raw format. To be able to target applications that we believe require frame rates of around 1 Hz–1000 Hz, we have decided to continue the concept of external cameras to achieve these frame rates (Hauth et al. 2013), while integrating the motorised focus support of the multistations. We consider it an advantage that the external camera does not disturb the thermal design of the multistation even at high pixel clock rates.

During the process of prototype development, different types of construction emerged. External implementations make it possible to mount the camera on the ocular or to replace it. These are used in combination with commercial total stations or tacheometers and can be converted and adapted to the particular conditions and requirements. One example of such a modular system is DAEDALUS of ETH Zurich (Bürki et al. 2010; Charalampous et al. 2014; Guillaume et al. 2012, 2016). In this concept, a CCD chip replaces the eyepiece; the camera does not capture the crosshairs in the image.

The fixed camera provides constant calibration parameters, as opposed to the modular version, which requires calibration after reconfiguration. An early prototype is mentioned in Walser (2004), and the prototype series IATS2 from the manufacturer Leica in Reiterer and Wagner (2012), Wagner et al. (2013, 2016) and Wasmeier (2009b). Walser (2004) describes the camera with an affine chip model and uses a combined approach to take camera and instrument errors into account. Wasmeier (2009a) shows a comparison of different methods.

The measuring system MoDiTa developed at i3mainz extends an existing instrument modularly by an external industrial camera. The self-calibration based on the photogrammetric camera model fully integrates the external camera into the measurement process. By permanently tracking the crosshair, the accuracy characteristics of the total station are maintained. In the following, we explain the measurement system and the calibration. The necessary image-based acquisition of the crosshair for the calibration and the further measurement process is discussed in more detail below. The approach used here shows how a calibration can be calculated flexibly on site using software and various cameras and total stations (compatible with the TCA, TPS, TS and MS series from the manufacturer Leica) without any additional equipment.

2 Measurement System

The Modular Digital Imaging Total Station (MoDiTa) combines a high-end industrial camera with a digital total station in a modular and flexible way and is currently at prototype level (Fig. 1). As described in Hauth et al. (2013), the standard eyepiece of the total station is replaced by an industrial camera via a bayonet ring. To balance the weight of the camera, we attached a counterweight to the telescope. The cameras can be mounted in any rotation around the target axis by means of a simple clamping screw. By means of a corresponding adapter, the eyepiece camera takes images directly from the crosshair plane. The crosshair is thus captured in every image. Among other things, this enables automatic, image-based targeting, which is within the accuracies of the total station (standard deviation according to ISO 17123-3 2001). With the help of template matching, non-signalled distinctive features are captured without contact. The use of the total station's motorised autofocus is advantageous because, among other things, it enables simple self-calibration. After self-calibration, we calculate the corresponding horizontal or vertical angle for each point of interest in the image.

Fig. 1: The upper pictures show the ready-to-measure system MoDiTa in combination with a multistation MS50. The picture below shows the schematic structure of the eyepiece adapter for attaching the digital camera with the optics. The optics are attached to the eyepiece holder via an S-mount connection. This holder is connected to the total station via a bayonet connection for the eyepiece. The length of the eyepiece holder determines the magnification and thus how much of the crosshair is imaged onto the sensor. The digital camera is attached to the camera mount via a C-mount or CS-mount connection.

Due to the modular design of the measuring system, a camera can be selected depending on the respective project requirements. Project requirements might include:

- a monochrome, NIR or RGB (Bayer pattern) sensor,
- low-light suitability (usually by large pixel pitch) or high resolution,
- a global or rolling shutter,
- availability of a hardware trigger,
- availability of line scan modes,
- a high frame rate (frames per second).

The industry-standard C-mount used makes it easy to replace components. Depending on the industrial camera used, images can be captured in different modes. By selecting an area of interest (AoI), the range of captured lines and columns can be defined. In line-wise mode, only one line is captured over the width of the image. The data to be transmitted can thus be reduced, enabling a higher image capture frequency. A more detailed description can be found in Hauth et al. (2013).
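To illustrate the effect of reducing the area of interest, the following back-of-the-envelope sketch (Python) estimates an upper bound on the achievable frame rate from the number of transmitted pixels and an assumed usable link bandwidth. The bandwidth figure, the sensor dimensions and the assumption that readout scales linearly with the transmitted rows are illustrative only; they do not describe the actual camera firmware or interface.

def max_frame_rate(rows, cols, bytes_per_pixel=1, usable_bandwidth=3.2e8):
    """Rough upper bound on frames per second when the frame rate is
    limited by the transmission link (here assumed ~320 MB/s usable)."""
    bytes_per_frame = rows * cols * bytes_per_pixel
    return usable_bandwidth / bytes_per_frame

# Full frame of an assumed 2456 x 2054 sensor versus a narrow AoI and a single line:
print(round(max_frame_rate(2054, 2456)))   # full frame: a few tens of fps
print(round(max_frame_rate(100, 2456)))    # 100-line AoI: on the order of 1000 fps
print(round(max_frame_rate(1, 2456)))      # line-wise mode: far higher still

Under these assumptions the gain from a small AoI or line-wise readout is roughly proportional to the reduction in transmitted rows, which is the effect exploited for the high-frequency applications described later.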
3 Self-Calibration

To obtain measurement results within the measurement accuracy of the total station, calibration of the entire system is required. This is done by self-calibration directly on site. Given the speed of the self-calibration process, we do not intend to achieve repeatability of the calibration parameters of the camera in different setups. The aim is rather to be able to use the measuring system quickly and in an application-oriented manner. The determination of interpolable parameters for a particular combination of camera and total station was never attempted. The user installs or replaces the camera on site, and the measurement can be continued after calibration. Due to the simple mounting of the camera and the modular design, it is near impossible to recreate an identical setup. As a result, there are minimal differences in the optical path for each setup. Differences of several pixels in the image are possible.

Calibration is mainly carried out automatically and only needs to be operated manually by the user at the beginning. Before calibration, it is necessary to detect the crosshair to provide a reference image of the crosshair. The crosshair reference image ensures consistency of visual aiming through the eyepiece with camera-based aiming. Furthermore, the reference image of the crosshairs is used to correct any camera movements computationally, cf. Sect. 4. The telescope is moved relative to a fixed target point in such a way that the target point is imaged at favourably distributed locations on the image plane (Schlüter et al. 2009).
This allows for the collection of data for an overdetermined linear system of equations. The software provides for different patterns with different distributions of the observation points in the image. The selection of patterns makes it possible to open up new fields of application in an applied, scientific environment by means of an inexpensive measuring system. For example, a comprehensive high-precision calibration with up to 36 measurements can be carried out for an investigation into atmospheric refraction. The implemented maximum number of observation points in the image is set to 9 points per quadrant (4 quadrants × 9 observation points = 36). It is possible to define fewer observation points, thus reducing the overdetermination. For a simpler example, see Fig. 11a. After the measurement, we have 12 images, each with the target at different positions in the image. In this case, we use a black and white laser-scanning target with a checkerboard pattern. For the automatic, rough approach of these target directions, the knowledge of a rough start transformation is sufficient, which only includes the camera constant and the rotation of the camera coordinate system around the target axis of the total station. We merely tilt the telescope to the side by a small, fixed amount to determine the start transformation. During the measurement, the software continuously observes the crosshair, the so-called matching. Due to the simple mounting of the camera, the crosshair is not in the centre of the image. It may also be rotated around the optical axis.

In the context of self-calibration, we calculate the parameters via parameter estimation based on the least squares method. The functional model is based on the mapping relations between sensor space and object space (Walser 2004). Optical distortion and a possible tilt of the camera are compensated by the distortion approach according to Luhmann et al. (2020). With c as the camera constant for the entire optics, the unknown angles H and V to the target, the rotation κ, c and the photogrammetric radial, tangential and asymmetric distortion parameters (A1, A2, A3, B1, B2, C1, C2) are obtained. We describe the imaging of the camera chip by a 2D transformation using a photogrammetric distortion model. Δx′ and Δy′ represent the parameters of the distortion; x0 and y0 represent the principal point, i.e. the detected crosshair centre. The angle readings to the fixed target are not measured directly, but result from the pixel coordinates of the reference crosshair. The index P represents the measured values to the target point.
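The parameter set (A1–A3 radial-symmetric, B1–B2 decentring/tangential, C1–C2 affinity and shear) corresponds to a widely used close-range parameterisation. The following sketch applies one common formulation of such a correction; it is meant to illustrate the role of the parameters and is not a verbatim reproduction of the model implemented in the MoDiTa software.

def distortion_correction(x, y, A1, A2, A3, B1, B2, C1, C2):
    """One common close-range formulation: radial-symmetric terms (A1-A3),
    decentring terms (B1, B2) and affinity/shear terms (C1, C2).
    x, y are image coordinates reduced to the principal point [mm]."""
    r2 = x * x + y * y
    dr = A1 * r2 + A2 * r2**2 + A3 * r2**3              # radial-symmetric part
    dx = x * dr + B1 * (r2 + 2 * x * x) + 2 * B2 * x * y + C1 * x + C2 * y
    dy = y * dr + B2 * (r2 + 2 * y * y) + 2 * B1 * x * y
    return dx, dy

The corrections Δx′ and Δy′ computed in this way enter the image-space direction vector of Eq. (1) below.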
The unit vector to the searched target point is formed from the total station readings and the pixel position of the image point of one measurement:

(x′, y′, c)ᵀ = (x_P − x_0 − Δx′,  y_P − y_0 − Δy′,  c)ᵀ    (1)

The corresponding point in object space is

(X_P, Y_P, Z_P)ᵀ = D · R_IP · R_IA · R_HP · R_VP · R_κ · (x′, y′, c)ᵀ / ‖[…]‖,    (2)

with

‖[…]‖ = √(x′² + y′² + c²).    (3)

The total vector in image space is normalised to unity, which is indicated by the division by ‖[…]‖. The spatial distance D is actually not required for the calibration; D prolongs the unit vector to the object point.

The matrix R_κ describes the rotation of the camera sensor around the optical axis:

R_κ = [ cos κ  −sin κ  0 ;  sin κ  cos κ  0 ;  0  0  1 ]    (4)

R_HP and R_VP follow from the graduated circle readings. The required values are supplied by the total station. The matrices describe the necessary rotations to transform the direction vector from the system of the total station into a superordinate coordinate system. Thus, when the instrument has been stationed beforehand, the coordinates are converted directly into the system used.

R_HP = [ cos H_P  −sin H_P  0 ;  sin H_P  cos H_P  0 ;  0  0  1 ]    (5)

R_VP = [ 1  0  0 ;  0  sin V_P  cos V_P ;  0  cos V_P  −sin V_P ]    (6)

R_IP and R_IA describe the rotation of non-compensator-corrected total station readings into compensator-corrected ones (I_P and I_A are calculated from the inclination in the direction of the target axis and transversely to the direction of the target axis). X_P, Y_P and Z_P are calculated and denote the normalised direction vector to the target point. By using (7) and (8), the angles H_P and V_P can be calculated:

H_P = atan(X_P / Y_P),    (7)

V_P = atan((X_P / sin H_P) / Z_P)  for sin H_P > cos H_P,  else  V_P = atan((Y_P / cos H_P) / Z_P).    (8)
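A minimal numerical sketch of Eqs. (1)–(8) is given below (Python/NumPy). It assumes that the inclination corrections R_IP and R_IA have already been applied (they are set to identity here), that all angles are handled in radians, and that atan is implemented with arctan2 to resolve the quadrant; these are simplifications for illustration, not the production code of the system.

import numpy as np

def rot_kappa(k):
    return np.array([[np.cos(k), -np.sin(k), 0.0],
                     [np.sin(k),  np.cos(k), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_H(H):
    return np.array([[np.cos(H), -np.sin(H), 0.0],
                     [np.sin(H),  np.cos(H), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_V(V):
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.sin(V),  np.cos(V)],
                     [0.0, np.cos(V), -np.sin(V)]])

def direction_angles(xp, yp, x0, y0, dx, dy, c, kappa, H, V):
    """Eqs. (1)-(8): pixel position -> normalised direction -> H_P, V_P."""
    v_img = np.array([xp - x0 - dx, yp - y0 - dy, c])          # Eq. (1)
    v_img = v_img / np.linalg.norm(v_img)                      # Eq. (3)
    X, Y, Z = rot_H(H) @ rot_V(V) @ rot_kappa(kappa) @ v_img   # Eq. (2), R_IP = R_IA = I
    H_P = np.arctan2(X, Y)                                     # Eq. (7)
    if abs(np.sin(H_P)) > abs(np.cos(H_P)):                    # Eq. (8); abs() added as a numerical guard
        V_P = np.arctan2(X / np.sin(H_P), Z)
    else:
        V_P = np.arctan2(Y / np.cos(H_P), Z)
    return H_P, V_P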
If the change is H V K H V Ch 0 �[…]� ⎢ ⎥ ⎢ ⎥ c 1 ⎣ ⎦ ⎣ ⎦ too large, a new detection is necessary. Figure 9 provides a simplified overview of the single steps. In the following, we We consider measurements independently and equally distinguish between the inner and the outer crosshair. The accurately. The weight matrix P = I is defined with the ones two inner lines mark the centre of the crosshair. The outer on the main diagonal or zeros if the measurement is not to crosshair is composed of the six outer lines of the geodetic be included in the equation as an error. crosshair (Fig. 2). The elaborate modelling of the reference The Cholesky factorisation according to Förstner and crosshair makes it possible to ensure a largely continuous Wrobel (2016) is used to solve the system of normal equa- tracking later on, even if line elements are only recognis- tions. The normal equation matrix is split into an upper and able in parts. lower triangular matrix C and CT. We solve the system of normal equations by subsequent forward and backward sub- 4.1 Crosshair Detection stitution. This saves computing time because instead of the entire normal equation matrix N, only one triangular matrix During crosshair detection, first the outer crosshair with its needs to be inverted (Förstner and Wrobel 2016; Luhmann six lines will be roughly determined. According to Steger et al. 2020). (1996a, b, 1998), a Gaussian smoothing filter in combination From the linear dependent residuals of the unknowns fol- with its partial directional derivatives is applied. According lows the estimation of the unknowns as to Canny (1983), the smoothing and the threshold values are −1 T T determined. If the value of the second partial derivative of x ̂ = A Pl A Pl. (12) an image point exceeds the upper threshold in a pixel, this 1 3 PFG The mean value then gives the line width. The contour width can differ depending on the camera and the total stations crosshairs, but it must be at least one pixel wide in order to be detectable. The results are the start and end points, the straight line equation of the six crosshair lines and the line width. We use the six compensated lines of the outer crosshair to determine the rough crosshair centre. All intersections of the straight lines are formed. We eliminate negative coordi- nates. By forming the median of the nine intersections, the rough centre is calculated. The maximum distance from the Fig. 2 Geodetic crosshair with pixel coordinate system. Also shown is the distinction between inner (red) and outer (green) crosshair rough centre to the intersection points and the approach of an isosceles triangle are used to calculate the distance between two parallel crosshair lines. is detected as a line point with sub-pixel accuracy. If the For the definition of the inner crosshair, we define a cir - second partial derivative is smaller than the lower threshold, cle with the centre equal to the roughly determined centre we dismiss the pixel. If the value is between both thresholds, and the radius equal to the distance between two parallel the pixel is only used if it can be connected by detected line crosshair lines. According to Luhmann et al. (2020), one- points. As a result, we obtain several line segments. These dimensional grey value profiles are formed vertically to the are then examined for false detections according to Haralick circumference. and Shapiro (1992) or Suesse and Voss (1993). 
By calculat- These are obtained by averaging all existing grey val- ing a regression line through the image points of the lines, ues of a line that lies vertically to the circular ring. Using the mean distance of the individual points to the line can a Gaussian smoothing filter and a Laplace operator, we are be calculated. We reject points with greater distance than able to calculate the edge positions with sub-pixel accuracy. the mean. The regression line is then calculated again. By We compare the edge amplitudes with a previously defined defining a limit value for the direction difference and the threshold value. If the amplitude is greater than the threshold distance of the end points of neighbouring lines, these are value, an edge is present at the corresponding image posi- merged if necessary (Fig.  3). The regression line is then tion. A total of eight points are detected on the inner cross- calculated again. This is repeated until the maximum val- hair, two points per line (Fig. 4a). If more than four lines ues for the direction difference and the distance of the line intersect with the circular ring, only the best four edges are end points are no longer undercut. The longest six lines of used. The selection is made via the calculated edge ampli- the calculations correspond to the outer geodetic crosshair. tude. The greater the amplitude, the higher is the contrast of These are still in the form of polylines. They are calculated the contour at the image position. The start and end points individually by adjustment according to Haralick and Sha- of the edges per line are calculated as a mean so that there piro (1992), Suesse and Voss (1993) as a straight line equa- is one point for each inner line of the crosshair (Fig. 4b). tion. In addition, the line width is determined for later use For the orientation of the cross, the direction angle from the in the precise determination of the crosshair. Simplified, we rough centre to the point with the largest edge amplitude is calculate the width of the longest line by edge detection. used. Starting from a point on the line, we search the perpendicu- For the precise determination of the crosshair, we con- lar distance on both sides up to the edge. The length of the sider the lines individually and points are determined at perpendicular is determined for each pixel on the vector line. equal distances on each line. With the help of the direction Fig. 3 Detection of the outer crosshair according to Canny (1983), Fig. 4 Rough detection of the inner crosshair. a Definition of an inter - Steger (1996a, b, 1998), and Suesse and Voss (1993). Solving the section circle around the rough centre. Edge detection following the ambiguities using an example of an end of a line. a Line detection. b circle. b Mean of the edge points Merging lines based on limits. c Discarding falsely detected lines 1 3 PFG angles, we form sectors of circles around the rough cross- hairs. Within two defined circles with different radii, the edges per pixel of the lines are detected again according to Luhmann et al. (2020) (Fig. 5a). As a result, the edge begin- nings and ends of the inner crosshair are available for each line as coordinates between the two circles. These are aver- aged again so that the centre points of the line contours are available. Points with the same direction on the inner cross- hair are combined. This results in two lines: one horizontal and one vertical. These are equalised according to the same Fig. 
For the definition of the inner crosshair, we define a circle with the centre equal to the roughly determined centre and the radius equal to the distance between two parallel crosshair lines. According to Luhmann et al. (2020), one-dimensional grey-value profiles are formed perpendicular to the circumference. These are obtained by averaging all existing grey values of a line that lies perpendicular to the circular ring. Using a Gaussian smoothing filter and a Laplace operator, we are able to calculate the edge positions with sub-pixel accuracy. We compare the edge amplitudes with a previously defined threshold value. If the amplitude is greater than the threshold value, an edge is present at the corresponding image position. A total of eight points are detected on the inner crosshair, two points per line (Fig. 4a). If more than four lines intersect with the circular ring, only the best four edges are used. The selection is made via the calculated edge amplitude: the greater the amplitude, the higher the contrast of the contour at the image position. The start and end points of the edges per line are averaged so that there is one point for each inner line of the crosshair (Fig. 4b). For the orientation of the cross, the direction angle from the rough centre to the point with the largest edge amplitude is used.

Fig. 4: Rough detection of the inner crosshair. a Definition of an intersection circle around the rough centre; edge detection following the circle. b Mean of the edge points.

For the precise determination of the crosshair, we consider the lines individually, and points are determined at equal distances on each line. With the help of the direction angles, we form sectors of circles around the rough crosshair centre. Within two defined circles with different radii, the edges per pixel of the lines are detected again according to Luhmann et al. (2020) (Fig. 5a). As a result, the edge beginnings and ends of the inner crosshair are available for each line as coordinates between the two circles. These are averaged again so that the centre points of the line contours are available. Points with the same direction on the inner crosshair are combined. This results in two lines: one horizontal and one vertical. These are adjusted according to the same principle of Haralick and Shapiro (1992) and Suesse and Voss (1993) (Fig. 5c). The exact image coordinates of the crosshair centre are now available with sub-pixel accuracy via the point of intersection.

Fig. 5: Precise detection of the inner crosshair. a Definition of circles around the rough centre; creating sectors. b Mean of the edge points. c Compensated lines with end points.
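Sub-pixel localisation in such a one-dimensional grey-value profile can be sketched as follows: smooth the profile with a Gaussian, apply a discrete second-derivative (Laplace) operator, take the strongest response and refine its position with a parabola fit over the two neighbouring samples. This is a generic illustration of the principle described by Luhmann et al. (2020); the operator chain, sigma and threshold are assumptions and not the exact values used in the MoDiTa software.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def subpixel_position(profile, sigma=1.5, threshold=5.0):
    """Return the sub-pixel position of the strongest line/edge response in a
    1-D grey-value profile, or None if its amplitude stays below the threshold."""
    smoothed = gaussian_filter1d(np.asarray(profile, dtype=float), sigma)
    second = np.gradient(np.gradient(smoothed))       # discrete Laplace operator
    i = int(np.argmax(np.abs(second[1:-1]))) + 1      # strongest interior response
    if abs(second[i]) < threshold:
        return None
    y0, y1, y2 = second[i - 1], second[i], second[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom   # parabola vertex
    return i + offset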
We calculate réseau crosses for the alignment of the precise crosshair. This is done by calculating two points per line, by defining two circles with different radii (small circle: 3 times the distance of the parallel crosshair lines; large circle: shortest distance from the centre to the edge of the image, reduced by 10%). The resulting intersections with the outer cross lines are all within the image area. The smaller circle is intersected with the precise inner crosshair lines (Fig. 6a). Similar to the procedure already described for the rough determination of the inner crosshair, the edge contours are determined perpendicular to the line from the crosshair centre according to Luhmann et al. (2020) (Fig. 6b). The detection of the contours is again carried out via grey-value profiles, and the start and end points are then averaged. We repeat the steps for the outer radius so that two points are available for each outer crosshair line. Finally, we sort the detected points, separately for the inner and outer circle, into the correct quadrant. The 12 points on the outer cross lines are determined precisely and are available in the correct position as a pair of points per line. One hundred further points are then determined between two points of a line according to the procedure shown in Fig. 5. We defined this number so that a large number of points is available for the definition of the straight line and the following adjustment; a different definition is also possible. The one hundred points to be detected are distributed at equal intervals along the length between the start and end point of a line. As in the previous step, the detection for each point of the line contour is performed perpendicular to the line. The points on the opposite crosshair lines are combined so that the final result is a horizontal and a vertical line (Fig. 7). The adjustment calculation is carried out according to the least squares method. We detect and eliminate outliers before the adjustment.

Fig. 6: Point detection of the outer crosshair using the example of a double line. a Intersection of the inner circle with an inner line. b Line points are formed via edge detection perpendicular to the intersection point.

The calculation of the straight lines is based on the procedure according to Kampmann and Renner (2004), method 3. We transferred the described model of the adjustment calculation to the model of the crosshair with two straight lines and adapted it to its special features. The double lines of the crosshair result in an additional unknown, so that the functional model in coordinate form is as follows:

a₁,₂ · x_i + b₁,₂ · y_i + d₁,₂ ± δ = 0.    (14)

The variable δ corresponds to half the parallel distance of a double line to the adjusted straight line. The values are chosen with different signs for the two strokes of a double line and should be defined identically for both straight lines. The parameter δ is omitted for the single line. The equation is set up independently for the two lines, so that the parameters must be determined separately for each equation. The eight parameters of the two lines to be determined are listed in the unknown vector X.

The adjustment of both straight lines is done in one calculation. The approximate values for the variables d₁,₂ and δ are determined empirically; the initial b and a values are given the value zero, so that the initial straight lines are parallel to the axes of the image coordinate system. Since the underlying equation of the functional model is linear, the partial derivatives with respect to the parameters are equal to the observations used. Simplified, the condition equation can be regarded as an observation equation for the software solution, but with a significantly higher weight than the observations. The weight for all observations equals 1, and the condition equation is given the weight 10 (Kampmann and Renner 2004). As previously described in the section on calibration, the solution of the system of normal equations is carried out by means of Cholesky factorisation according to Luhmann et al. (2020) or Förstner and Wrobel (2016). The adjustment is iterated until the termination criterion is reached. From the second iteration onwards, the unknowns X are replaced by the current parameter estimates X̂ for the following iterations. For the determination of the two straight lines of the outer crosshair in the sub-pixel range with one decimal place, a few iterations are already sufficient. We choose the termination criterion in such a way that in normal cases only a few iterations are necessary. The three opposite lines with approximately the same orientation are adjusted together to form one straight line (Fig. 7). The outer line cross has thus been determined precisely, so that the réseau crosses can then be determined as described. These are located on the straight lines determined at this point and, together with the crosshair centre, define the precise position and orientation of the crosshair in the image coordinate system.

Fig. 7: Principle of adjustment (simplified example). a Before the adjustment. b After the adjustment.
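The double-line model of Eq. (14) can be illustrated with a small least-squares sketch: for points belonging to the two strokes of one double line, estimate a common normal direction (a, b), an offset d and the half-distance δ, with a ±1 flag indicating the stroke. This is a simplified, unweighted illustration of the idea and not the weighted Kampmann and Renner (2004) formulation with condition equations used in the software.

import numpy as np

def fit_double_line(points, stroke):
    """Fit a*x + b*y + d + s*delta = 0 (cf. Eq. 14) to the two strokes of a
    double line. points: (n, 2) array; stroke: array of +1/-1 flags telling
    which stroke each point belongs to. Returns a, b, d, delta."""
    pts = np.asarray(points, dtype=float)
    s = np.asarray(stroke)
    # Pooled within-stroke scatter: its smallest eigenvector is the common line normal.
    cov = np.zeros((2, 2))
    for flag in (+1, -1):
        grp = pts[s == flag]
        centred = grp - grp.mean(axis=0)
        cov += centred.T @ centred
    _, eigvec = np.linalg.eigh(cov)
    a, b = eigvec[:, 0]                      # normal vector (smallest eigenvalue)
    u = pts @ np.array([a, b])               # signed distances along the normal
    m_plus, m_minus = u[s == +1].mean(), u[s == -1].mean()
    d = -0.5 * (m_plus + m_minus)            # offset of the mean line
    delta = -0.5 * (m_plus - m_minus)        # half distance between the strokes
    return a, b, d, delta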
4.2 Crosshair Matching

Due to the possibility that the crosshair position changes after a longer period of time and when the telescope position changes, it must be determined continuously (Atorf et al. 2019). In the case of smaller movements of the crosshair, this can be matched by calculating a normalised correlation coefficient (NCC). The current centre of the crosshair in the image is compared with the last detected crosshair. The current position of the crosshair centre must be within a generated model in order to be matched. If the difference in position is too large, the crosshair must be detected again. Matching is again carried out using the NCC procedure according to Luhmann et al. (2020). This procedure corresponds to a simplified detection of the precise crosshair centre, since no homogeneous background is required throughout. The current crosshair centre should be able to be continuously tracked in the image during a measurement.

For the model image, we define a circle with 0.75 times the distance between the parallel lines around the precise centre of the crosshair. Within this image area, the similarity comparison is carried out by means of normalised cross-correlation. The model image, in this case the pixels of the circle area, is generated on several image pyramids in different planes and rotations. We generate the image pyramids until the top level still provides enough information about the image. The processing effort is higher than with other correlation methods due to the large number of images generated. However, for the selected image area and given today's technology standards, this is not a disadvantage (Luhmann et al. 2020). Finally, the origin of the model image is set to the precise crosshair centre.

For the crosshair matching, the NCC model of the last detected crosshair centre together with the current crosshair is required. We define the search area for the inner crosshair by a circle around the last detected crosshair, with a radius of 1.5 times the distance between the parallel lines. This saves unnecessary computing time, since the centre of the crosshair cannot be located at the edge of the image. Then, according to Luhmann et al. (2020), all defined instances of the crosshair are detected within the image section of the current crosshair. We only use the best instance for the crosshair, since it is unique in the image. The best instance is characterised by the highest value of the correlation coefficient, which can take values between zero and one. The calculation of the precise crosshair together with the resulting réseau crosses is then carried out according to the procedure already described. The software saves the coordinates again in a local file.

However, the inner crosshair can also be determined mathematically by detecting the outer crosshair (Fig. 8). By determining the crosshair lines and the geometric reference to the target axis with sub-pixel accuracy, the intersection of the crosshair lines does not have to be determined repeatedly in every image but may, for example, also be partially covered. In the user interface of the control software, coordinate differences between the last detected and the matched crosshair centre are displayed to the user (see Fig. 9).

Fig. 8: a, b Goal of crosshair matching (simplified example). Matched points on at least two lines in two directions (a) or on one line in combination with a successful matching of the inner crosshair (b) ensure tracking of camera motions. c, d Examples of an unmatchable inner crosshair due to dark background (c) and overexposure (d). Nevertheless, the inner crosshair is calculable by the detection of the outer crosshair.

Fig. 9: A simplified overview of the single steps.
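The matching step can be sketched with a plain normalised cross-correlation, for example via OpenCV. The rotation and pyramid handling of the matching implementation referenced above is omitted here; the search-window radius of 1.5 times the line distance is simply passed in as a parameter, and the window clipping is an assumption of this sketch.

import cv2

def match_crosshair(image, template, last_centre, search_radius):
    """Locate the crosshair template near the last known centre using
    normalised cross-correlation. Returns (x, y, score) in full-image pixels."""
    x0, y0 = int(last_centre[0]), int(last_centre[1])
    r = int(search_radius)
    th, tw = template.shape[:2]
    # Crop a search window around the last detected centre (clipped to the image).
    x1, y1 = max(0, x0 - r - tw // 2), max(0, y0 - r - th // 2)
    x2, y2 = min(image.shape[1], x0 + r + tw // 2), min(image.shape[0], y0 + r + th // 2)
    window = image[y1:y2, x1:x2]
    score_map = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(score_map)
    cx = x1 + loc[0] + tw / 2.0
    cy = y1 + loc[1] + th / 2.0
    return cx, cy, best

If the returned score falls below a chosen acceptance value, this corresponds to the case described above in which a full new detection of the crosshair is triggered.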
5 Practical Applications

In the following, we cover exemplary studies on different applications of MoDiTa in the structural health monitoring (SHM) of existing structures.

5.1 Studies on Distance Independence

A practical application for IATS is the SHM of structures such as factory chimneys, dams or bridges (Paar et al. 2021; Zschiesche 2022). What all these structures have in common is that they usually have elongated dimensions. Often it is impossible to stand directly perpendicular to the structure or to measure the entire structure from the same distance due to environmental conditions, such as rivers or railway lines (Fig. 10). Changes in the distance to the measured object lead to refocusing of the optics and thus also to changes in the distortions. To discuss this aspect in more detail, we have carried out measurements from different distances to the instrument.

Fig. 10: Exemplary view of two IATS (MoDiTa) on an elongated structure with different forced distances. Here, the observation of a reference point (green dashed line) and simultaneous observation of other monitoring points on the bridge (yellow) are shown.

For the measurement, we used a TS30 (Leica Geosystems AG 2009) and an industrial camera UI-3250 ML-M (IDS Imaging Development Systems GmbH 2015) with 1.92 MPixel. The measurement took place on 3 February 2022 in the courtyard of Mainz University of Applied Sciences between 10 am and 1 pm (CET). We limited the distance to between 3.5 and 100 m. Over this range, calibration measurements were carried out with MoDiTa, with a sample of 12 measurements per distance. These measurements were taken over the entire image area. For comparison, we also applied the calibration of the measurement at 20 m distance to the measurements at shorter or longer distances. We compare the results in the form of residuals or deviations in Fig. 11. The different positions that the IATS moves to are visible.

Figure 11b shows residuals of the 20 m measurement calculated with the 20 m calibration in the range of −0.3 to 0.2 mgon for the horizontal and zenith angle, which is within the expected angular accuracy of the measurement system. We assume that the adjustment models the system successfully. In comparison, even at a longer distance to the object, deviations appear only slightly. However, the use of the same calibration shows that at a greater distance significantly larger deviations occur in the marginal area of the measurement image (Fig. 11c). Close to the crosshair, in this case also close to the centre of the image, the deviations have values of max. 0.3 mgon; in the outer range they reach −0.6 to 0.9 mgon horizontally and −0.1 to 0.5 mgon in zenith distance. Therefore, the middle part is still modelled within the accuracy of measurement, but systematics can already be identified in the outer area. From a distance of approximately 7 m, the deviations increase significantly. For clarification, Fig. 11e shows the measurement at 4.5 m distance. The range of the residuals is from −0.7 to 1.1 mgon for the horizontal and zenith angle. Here again, the values are smaller towards the centre, but on average well above 0.3 mgon. The systematic deviations can be traced back to an unsuitable model of the adjustment. Walser (2004) and Wasmeier (2009a) achieved similar results with an integrated coaxial camera, so this corresponds to our expectations. We suspect the influence of a scale factor, possibly the optical system's focal length. Figure 11d, f shows the residuals of the self-calibrations with the corresponding measurements. In both cases, the residuals remain within the measurement accuracy of the total station and do not exceed 0.3 mgon.

Fig. 11: a A measurement image with displayed positions for self-calibration. The defined target is located at the positions marked in red. In this way, different positions (in this case 12) are approached across the image for adjustment. b The resulting residuals of a measurement at 20 m distance to the instrument. c The resulting deviations of a measurement at 70 m distance calculated with the self-calibration at 20 m distance. d The resulting residuals of a measurement at 70 m distance. e The resulting deviations of a measurement at 4.5 m distance calculated with the self-calibration at 20 m distance; the larger deviations compared to (b, c) are clearly visible. f The resulting residuals of a measurement at 4.5 m distance.
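For orientation, the relation between an image offset and the resulting angle can be illustrated numerically. The camera constant of roughly 456–461 mm is taken from the order of magnitude estimated in Table 1 below; the pixel pitch used here is an assumed example value only.

import math

def pixel_to_mgon(pixels, pixel_pitch_mm=0.0045, camera_constant_mm=458.0):
    """Angle subtended by a given image offset, in mgon.
    The pixel pitch is an assumed example value; the camera constant follows
    the order of magnitude estimated in Table 1."""
    angle_rad = math.atan(pixels * pixel_pitch_mm / camera_constant_mm)
    return angle_rad * 200000.0 / math.pi     # rad -> mgon (1 gon = pi/200 rad)

print(round(pixel_to_mgon(1.0), 2))   # ~0.63 mgon per pixel under these assumptions

Under these assumptions, a single pixel corresponds to a few tenths of a mgon, which is the scale on which the residuals in Fig. 11 should be read.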
Figure 12 shows the estimated distortion parameters over the entire distance range. The estimated nine parameters are presented with their respective accuracies as described in chapter 3. The calculation is made for the entire optics, i.e. for the telescope of the total station and the adapter used (Fig. 1). In the parameter estimation, we calculated a separate calibration for each distance and autofocus setting. In addition to the individual distances, we give the respective stepper motor positions, since these reflect the required mechanical movements of the focusing optics, which are responsible for the changes in the distortion parameters (Wasmeier 2009a).

For the template matching with the cross-correlation method, a circular section with a radius of less than 40 pixels was used in each case. The maximum mean error of unit of weight from the adjustment is 0.32 mgon. The smallest value is reached during the measurement at a distance of 20 m, where it is 0.13 mgon. All measured values of the 12 measurements were used, and no measured values were eliminated during the adjustment.

Fig. 12: Plot of the estimated parameters and a posteriori/empirical standard deviation.

Table 1 shows the maximum and minimum estimated parameter values and empirical standard deviations corresponding to Fig. 12a–h.

Table 1: Results of the adjustment (estimated results and empirical standard deviations)

Parameter      Est. min           Est. max           Std. min           Std. max
κ [rad]        2.05827 × 10^-5    1.14836 × 10^-3    1.75569 × 10^-4    4.33541 × 10^-4
c [mm]         456.5227           461.1159           0.3116             1.1808
A1 [1/mm^2]    -5.99941 × 10^-3   6.70138 × 10^-4    1.55959 × 10^-3    5.80336 × 10^-3
A2 [1/mm^4]    -5.65061 × 10^-5   1.82210 × 10^-3    4.60688 × 10^-4    1.78219 × 10^-3
A3 [1/mm^6]    -1.36531 × 10^-4   -5.42970 × 10^-7   3.48835 × 10^-5    1.36997 × 10^-4
B1 [1/mm^2]    -2.70265 × 10^-4   2.59471 × 10^-2    8.47366 × 10^-3    2.01747 × 10^-2
B2 [1/mm^2]    -9.56102 × 10^-4   3.31246 × 10^-2    2.89327 × 10^-2    1.02284 × 10^-2
C1 [1/mm]      -8.08512 × 10^-1   -9.74693 × 10^-2   5.33810 × 10^-2    1.32803 × 10^-1
C2 [1/mm]      3.14872 × 10^-2    2.84365 × 10^-1    5.72376 × 10^-2    1.39879 × 10^-1

All symmetric radial distortions show their minima and maxima at close range (Fig. 12a–c). In total, the calculated empirical standard deviations of the tangential distortions B1 and B2 (Fig. 12d, e) show similar orders of magnitude and are more consistent than those of the symmetric radial distortions. Both B1 and B2 show their greatest value at a long distance, but fluctuate more in the near range. The results show distinctly stronger changes of the parameters in the near range. This is in line with our expectations: with larger movements of the focusing optics, we anticipate correspondingly larger changes in the calibration parameters.

Overall, our studies show that micromovements of 0.5 mm/100 m can be resolved. The achievable accuracy depends on the atmospheric conditions and the illumination situation and decreases with increasing distance. The investigations indicate that, in the case of typical distances to structures for SHM of 10 m or greater, it is sufficient to use a single self-calibration for a complete project.
5.2 Deformation Measurement on a Steel Bridge

The identification of dynamic structural characteristics is an important aspect of SHM. It enables the analysis of monitored vibrations and the calculation of natural frequencies. Changes in natural frequency indicate possible structural damage. Conventionally, acceleration sensors are attached to the structure with a high labour input. MoDiTa enables high-frequency recordings of the movement behaviour without having to enter the structure. For the measurement of frequencies, the maximum frames per second (fps) are essential. Only at an adequately frequent sampling rate can the natural frequency be determined from the measured values and aliasing be ruled out. Bruschetini-Ambro et al. (2017) and Lachinger et al. (2022) do not recommend determining the damping from the excitation of a train crossing, because the selection of the ambient window has too great an influence on the result and the results therefore scatter too much.

To capture the deformation behaviour of a bridge during a crossing, we observed a steel bridge using MoDiTa (Fig. 13a, b). We used a Leica TS60 (Leica Geosystems AG 2020) in combination with an industrial camera UI3080 CP-M with 5.04 MPixel (IDS Imaging Development Systems GmbH 2016). The distance to the bridge was approximately 10 m. The recording frequency was 500 Hz with an exposure time of 0.002 s. To achieve this high frequency, we only observed an area of interest. The observed point is located in the upper centre of the western bridge. We attached no targets to the structure. Figure 13c shows the recorded deformation over time. For further analysis, we used the ambient window (shown in Fig. 13c by a green box). The ambient window captures the oscillation behaviour of the bridge without the additional mass of the train.

Fig. 13: a, b A steel bridge (30 m span) for freight and passenger traffic (railway crossing structure, lat 49.893556, lon 8.637458; measured on June 17 at 11:16 CET from a distance of 10 m with a Leica TS60 and a UI3080 CP-M camera, 5.04 MPixel, exposure time 0.002 s, recording frequency 500 Hz). c The recorded measured values in vertical direction; 25 s of the recording with 12,500 observations are shown, with the ambient window marked. d The result of the fast Fourier transform (FFT) of the ambient window with a main peak at 3.9 Hz (approximate value). e A best-fit evaluation (least-squares fit of a damped oscillation) of the ambient window with a natural frequency at 3.8 Hz (balanced value).

Figure 13d shows the results of the fast Fourier transform (FFT) of the ambient window to determine the natural frequencies. Only the results of the low frequencies are shown. A clear peak is visible at 3.9 Hz.

To calculate the damped oscillation using the least squares fit, we evaluated the measured values from second 19.0 to 20.0 (Fig. 13e). The result is a calculated natural frequency of 3.79 Hz. The mean error of unit of weight is estimated at 0.046 mm/10 m. This corresponds to the expected resolution and accuracy, cf. 5.1. Both evaluations come to almost the same result and confirm each other.
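A minimal version of the two evaluations (FFT of the ambient window and a least-squares fit of a damped oscillation) could look as follows. The model function and the starting values are generic assumptions and not the exact parameterisation used for Fig. 13.

import numpy as np
from scipy.optimize import curve_fit

def dominant_frequency(samples, rate_hz):
    """Return the frequency of the largest FFT peak (ignoring the DC term)."""
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]

def damped_oscillation(t, amp, decay, freq, phase, offset):
    return amp * np.exp(-decay * t) * np.cos(2.0 * np.pi * freq * t + phase) + offset

def fit_damped(t, y, f0):
    """Least-squares fit of a damped cosine, started at the FFT frequency f0."""
    p0 = [np.std(y) * np.sqrt(2.0), 0.1, f0, 0.0, np.mean(y)]
    params, _ = curve_fit(damped_oscillation, t, y, p0=p0)
    return params   # amp, decay, freq, phase, offset

Applied to the 500 Hz ambient window, the FFT peak and the fitted frequency would be expected to reproduce the two values reported above (approximately 3.9 Hz and 3.8 Hz).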
Further research is needed in this area, squares fit, we evaluated the measured values from second such as the comparison to other sensor systems. 19.0 to 20.0 (Fig. 13e). The result is a calculated natural fre- Acknowledgements The authors would like to thank Alexander Milch, quency of 3.79 Hz. The mean error of unit of weight is esti- Cedric Jager, Lukas Haas and Michael Biele for their work on the mated at 0.046 mm/10 m. This corresponds to the expected project. resolution and accuracy, cf. 5.1. Funding Open Access funding enabled and organized by Projekt Both evaluations come to almost the same result and con- DEAL. The project was funded by the Carl-Zeiss-Foundation, BAM— firm each other. Big-Data-Analytics in Environmental and Structural Monitoring, P2017-02-003. Availability of Data and Material The data presented in this study are available on request from the corresponding author. 6 Conclusion and Outlook Declarations In this article we explain how the general setup of the meas- uring system in combination with the developed software Conflicts of Interest/Competing Interests The authors declare no con- detects the crosshair and, thus, performs self-calibration. flict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the The self-calibration achieves accuracies within the measur- manuscript, or in the decision to publish the results. ing accuracy of the total station. We made assumptions such as the lines of the crosshairs are parallel and six precise lines Open Access This article is licensed under a Creative Commons Attri- represent the outer crosshair. We have not investigated either bution 4.0 International License, which permits use, sharing, adapta- tion, distribution and reproduction in any medium or format, as long assumption further. as you give appropriate credit to the original author(s) and the source, The algorithms explained track the crosshair position and provide a link to the Creative Commons licence, and indicate if changes orientation correctly and with sub-pixel accuracy, although were made. The images or other third party material in this article are the inner crosshair cannot be found by means of cross- included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in correlation. Due to the determination of the crosshair lines the article's Creative Commons licence and your intended use is not and the sub-pixel accurate geometric reference to the target permitted by statutory regulation or exceeds the permitted use, you will axis, the intersection point of the crosshair lines does not need to obtain permission directly from the copyright holder. To view a have to be determined again in every image but may also, copy of this licence, visit http://cr eativ ecommons. or g/licen ses/ b y/4.0/ . for example, be covered. This offers the measuring system more flexibility. In the explained practical application we show results of several calibration calculations. We have shown that it is not References necessary to calculate a separate photogrammetric calibra- Atorf P, Heidelberg A, Schlüter M, Zschiesche K (2019) Berührung- tion for each distance to the object. For the combination of slose Positionsbestimmung von spiegelnden Kugeln mit Methoden camera and total station used, areas can be defined which, des maschinellen Sehens. 
References

Atorf P, Heidelberg A, Schlüter M, Zschiesche K (2019) Berührungslose Positionsbestimmung von spiegelnden Kugeln mit Methoden des maschinellen Sehens. zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement 144:317–322. https://doi.org/10.12902/zfv-0268-2019
Bruschetini-Ambro S-Z, Fink J, Lachinger S, Reiterer M (2017) Ermittlung der dynamischen Kennwerte von Eisenbahnbrücken unter Anwendung von unterschiedlichen Schwingungsanregungsmethoden. Bauingenieur 92:2–13
Bürki B, Guillaume S, Sorber P, Oesch H (2010) DAEDALUS: a versatile usable digital clip-on measuring system for total stations. In: 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp 1–10. https://www.research-collection.ethz.ch/handle/20.500.11850/159968
Canny J (1983) Finding edges and lines in images. Master's thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Charalampous E, Psimoulis P, Guillaume S, Spiridonakos M, Klis R, Bürki B, Rothacher M, Chatzi E, Luchsinger R, Feltrin G (2014) Measuring sub-mm structural displacements using QDaedalus: a digital clip-on measuring system developed for total stations. Appl Geomat. https://doi.org/10.1007/s12518-014-0150-z
Förstner W, Wrobel BP (2016) Photogrammetric computer vision: statistics, geometry, orientation and reconstruction, vol 11. Springer International Publishing, Cham
Grimm D, Zogg H-M (2013) Leica Nova MS50 white paper. Leica Geosystems AG, Heerbrugg
Guillaume S, Bürki B, Griffet S, Mainaud Durand H (2012) QDaedalus: augmentation of total stations by CCD sensor for automated contactless high-precision metrology. In: FIG Working Week 2012
Guillaume S, Clerc J, Leyder C, Ray J, Kistler M (2016) Contribution of the image-assisted theodolite system QDaedalus to geodetic static and dynamic deformation monitoring. In: Conference and Seminar Proceedings: 3rd Joint International Symposium on Deformation Monitoring (JISDM). International Federation of Surveyors, FIG, Copenhagen, p 66. https://www.research-collection.ethz.ch/handle/20.500.11850/126892
Haralick RM, Shapiro LG (1992) Computer and robot vision, vol 2, chapter 14.9. Addison-Wesley Longman Publishing Co. Inc
Hauth S, Schlüter M, Thiery F (2013) Schneller und ausdauernder als das menschliche Auge: Modulare Okularkameras am Motortachymeter. In: avn – Allgemeine Vermessungsnachrichten, vol 120. VDE VERLAG GmbH, Berlin-Offenbach, pp 210–216
Huang YD, Harley I (1989) Calibration of close-range photogrammetric stations using a free network bundle adjustment. In: Gruen A, Kahmen H (eds) Proceedings of the First Conference on Optical 3-D Measurement Techniques. Wichmann Verlag, Vienna, pp 49–56
IDS Imaging Development Systems GmbH (2015) UI-3250ML-M-GL: data sheet. https://en.ids-imaging.com/store/ui-3250ml.html. Accessed 4 Feb 2015
IDS Imaging Development Systems GmbH (2016) UI-3080CP-M-GL: data sheet. https://en.ids-imaging.com/IDS/datasheet_pdf.php?sku=AB00848. Accessed 26 June 2016
ISO 17123-3 (2001) Optics and optical instruments—field procedures for testing geodetic and surveying instruments—part 3: theodolites. International Organization for Standardization
Kampmann G, Renner B (2004) Vergleich verschiedener Methoden zur Bestimmung ausgleichender Ebenen und Geraden. avn – Allgemeine Vermessungsnachrichten, pp 56–67
Lachinger S, Vorwagner A, Reiterer M, Fink J, Ambro SZ (2022) Entwicklung eines neuen Regelwerkes für dynamische Messungen von Eisenbahnbrücken der ÖBB. In: VDI Wissensforum GmbH (ed) 7. VDI-Fachtagung Baudynamik, pp 53–65
Leica Geosystems AG (2009) TS30 technical data. https://leica-geosystems.com/sftp/files/archived-files/TS30_Technical_Data_en.pdf. Accessed 4 Feb 2022
Leica Geosystems AG (2020) Leica Nova TS60: data sheet. https://leica-geosystems.com/en-gb/products/total-stations/robotic-total-stations/leica-nova-ts60. Accessed 4 Mar 2021
Luhmann T, Robson S, Kyle S, Boehm J (2020) Close-range photogrammetry and 3D imaging, 3rd edn. De Gruyter, Berlin
Paar R, Marendić A, Wagner A, Wiedemann W, Wunderlich T, Roić M, Damjanović D (2017) Using IATS and digital levelling staffs for the determination of dynamic displacements and natural oscillation frequencies of civil engineering structures. In: Kopačik A, Kyrinovič P, Maria JH (eds) Proceedings of the 7th International Conference on Engineering Surveying – INGEO 2017. Lisbon, Portugal, pp 49–58. https://www.bib.irb.hr/900625
Paar R, Roić M, Marendić A, Miletić S (2021) Technological development and application of photo and video theodolites. Appl Sci 11:3893. https://doi.org/10.3390/app11093893
Reiterer A, Wagner A (2012) System considerations of an image assisted total station – evaluation and assessment. avn – Allgemeine Vermessungs-Nachrichten, pp 83–94
Schlüter M, Hauth S, Heß H (2009) Selbstkalibrierung motorisierter Digitalkameratheodolite für technische Präzisionsmessungen. In: DVW e. V. – Gesellschaft für Geodäsie, Geoinformation und Landmanagement (ed) zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement. Wißner-Verlag, Augsburg, pp 22–28
Steger C (1996a) Extracting curvilinear structures: a differential geometric approach. In: Buxton B, Cipolla R (eds) Computer Vision—ECCV '96. Springer, Berlin Heidelberg, pp 630–641
Steger C (1996b) Extraction of curved lines from images. In: Proceedings of the 13th International Conference on Pattern Recognition, vol 2, pp 251–255
Steger C (1998) An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell 20:113–125. https://doi.org/10.1109/34.659930
Suesse H, Voss K (1993) Adaptive Ausgleichsrechnung und Ausreißerproblematik für die digitale Bildverarbeitung. In: Pöppl SJ, Handels H (eds) Mustererkennung. Springer, Berlin Heidelberg, pp 600–607
Wagner A, Wasmeier P, Reith, Wunderlich T (2013) Bridge monitoring by means of video-tacheometer—a case study. avn – Allgemeine Vermessungs-Nachrichten, pp 283–292
Wagner A, Huber B, Wiedemann W, Paar G (2014) Long-range geo-monitoring using image assisted total stations. J Appl Geodesy. https://doi.org/10.1515/jag-2014-0014
Wagner A, Wiedemann W, Wasmeier P, Wunderlich T (2016) Monitoring concepts using image assisted total stations. In: Paar R, Marendić A, Zrinjski M (eds) SIG 2016. Croatian Geodetic Society
Walser BH (2004) Development and calibration of an image assisted total station. Dissertation, ETH Zurich
Wasmeier P (2009a) Grundlagen der Deformationsbestimmung mit Messdaten bildgebender Tachymeter. PhD thesis, Technische Universität München
Wasmeier P (2009b) Videotachymetrie – Sensorfusion mit Potenzial. In: avn – Allgemeine Vermessungsnachrichten, vol 7. VDE VERLAG GmbH, Berlin-Offenbach, pp 261–267
Zschiesche K (2022) Image assisted total stations for structural health monitoring—a review. Geomatics 2:1–16. https://doi.org/10.3390/geomatics2010001

Journal: PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science (Springer Journals)
Published: 26 September 2022
ISSN: 2512-2789; eISSN: 2512-2819
DOI: 10.1007/s41064-022-00220-0
Copyright: © The Author(s) 2022, corrected publication 2022