Monthly Notices of the Royal Astronomical Society, Volume 476 (1) – May 1, 2018

7 pages


- Publisher
- The Royal Astronomical Society
- Copyright
- © 2018 The Author(s) Published by Oxford University Press on behalf of the Royal Astronomical Society
- ISSN
- 0035-8711
- eISSN
- 1365-2966
- DOI
- 10.1093/mnras/sty182

Abstract

Shack–Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ∼7, to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.

Keywords: instrumentation: adaptive optics, techniques: high angular resolution, techniques: image processing

1 INTRODUCTION

The Shack–Hartmann wavefront sensor is commonly used to measure wavefront aberrations in astronomical adaptive optics (Tyson 2015), optical testing (Malacara 2007), ophthalmology (Burns et al. 2014), and microscopy (Booth 2014). It consists of a two-dimensional (2D) array of microlenses. For a plane incident wavefront, the spots are focused on the optical axis of each microlens – the reference centres.
For an aberrated wavefront, the imaged spots are displaced from the reference centres. The estimation of the spot displacements between the aberrated and the reference spots allows one to retrieve the incident aberrated wavefront profile (Dai 1996).

Correlation algorithms are used to estimate spot displacements when extended sources are present [cf. Rais et al. (2016) for a recent review]. The cross-correlation (a) is optimal at lower signal-to-noise ratios (SNR; Vijaya Kumar, Dickey & Delaurentis 1992); (b) is fast and simple to implement compared with other methods such as maximum likelihood (Gratadour, Mugnier & Rouan 2005) and iterative gradient-based shift estimators (Rais et al. 2016); and (c) has unitary gain (Gratadour et al. 2010). Cross-correlation is applied to measure image displacements in solar adaptive optics (Wöger & Rimmele 2009; Löfdahl 2010; Townson, Kellerer & Saunter 2015), for laser guide star elongated spots (Thomas et al. 2008; Basden et al. 2014), and in extended scene wavefront sensing (Poyneer 2003; Robert et al. 2012).

The image displacement is computed by cross-correlating a reference1 image with the target aberrated sub-aperture image. The correlation algorithm can be implemented either in the spatial domain (Löfdahl 2010) or in the Fourier domain (Poyneer 2003; Sidick 2013). In both domains, the image displacement is measured in two steps. In the first step, the cross-correlation between the reference and the target image is computed. In the second step, a sub-pixel peak-finding algorithm is applied to the correlation image (Poyneer 2003). Commonly used peak-finding algorithms in image registration are 1D parabola fitting (Poyneer 2003; Thomas et al. 2006; Robert et al. 2012), Gaussian fitting (Nobach & Honkanen 2005), centre of gravity, pyramid fitting (Bailey 2003), and 2D quadratic polynomial fitting (Löfdahl 2010). These will be addressed further in the article (cf. Table 1).

Table 1.
Sub-pixel ($$s^{\prime }_x$$, $$s^{\prime }_y$$) peak-finding algorithms. The estimate of $$s^{\prime }_y$$ is obtained in an analogous fashion to $$s^{\prime }_x$$.

1D parabola fit (PF): $$s^{\prime }_x = x_0 + 0.5 \times \frac{ C[x_0-1, y_0] - C[x_0+1, y_0] }{ C[x_0-1, y_0] + C[x_0+1, y_0] - 2 C[x_0, y_0] }$$

Gaussian fit (GF): $$s^{\prime }_x = x_0 + 0.5 \times \frac{ \ln (C[x_0-1, y_0]) - \ln (C[x_0+1, y_0]) }{ \ln (C[x_0-1, y_0]) + \ln (C[x_0+1,y_0]) - 2 \ln (C[x_0,y_0]) }$$

Pyramid (PYF): $$s^{\prime }_x = x_0 + 0.5 \times \frac{C[x_0-1, y_0] - C[x_0+1, y_0]}{ \min \big ( C[x_0-1, y_0], C[x_0+1, y_0] \big ) - C[x_0, y_0] }$$

2D quadratic polynomial fit (QPF): ($$s^{\prime }_x$$, $$s^{\prime }_y$$) $$= \Big ( x_0 + \frac{2a_1a_5-a_2a_4}{a_4^2 - 4a_3a_5}, y_0 + \frac{2a_2a_3-a_1a_4}{a_4^2 - 4a_3a_5} \Big )$$ with polynomial f(x, y) = a0 + a1x + a2y + a3x2 + a4xy + a5y2.

Centre of gravity (CoG): $$s^{\prime }_x = x_0 + \frac{C[x_0-1, y_0] - C[x_0+1, y_0]}{ 3 \min \big ( C[x_0-1, y_0], C[x_0+1, y_0] \big ) - \big ( C[x_0, y_0] + C[x_0+1, y_0] + C[x_0-1, y_0] \big )}$$
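The five-pixel formulas of Table 1 translate directly into code. The sketch below is illustrative only (Python/NumPy is our choice, the function names are ours, and C is indexed C[x, y] to match the paper's notation):

```python
import numpy as np

def parabola_fit(C, x0, y0):
    """1D parabola fit (PF) of Table 1, along x."""
    return x0 + 0.5 * (C[x0-1, y0] - C[x0+1, y0]) / (
        C[x0-1, y0] + C[x0+1, y0] - 2.0 * C[x0, y0])

def gaussian_fit(C, x0, y0):
    """Gaussian fit (GF): the parabola fit applied to the log of the pixels."""
    ln = np.log
    return x0 + 0.5 * (ln(C[x0-1, y0]) - ln(C[x0+1, y0])) / (
        ln(C[x0-1, y0]) + ln(C[x0+1, y0]) - 2.0 * ln(C[x0, y0]))

def pyramid_fit(C, x0, y0):
    """Pyramid fit (PYF)."""
    return x0 + 0.5 * (C[x0-1, y0] - C[x0+1, y0]) / (
        min(C[x0-1, y0], C[x0+1, y0]) - C[x0, y0])

def cog_fit(C, x0, y0):
    """Three-pixel centre of gravity (CoG) variant of Table 1."""
    return x0 + (C[x0-1, y0] - C[x0+1, y0]) / (
        3.0 * min(C[x0-1, y0], C[x0+1, y0])
        - (C[x0, y0] + C[x0+1, y0] + C[x0-1, y0]))
```

For a correlation peak symmetric about (x0, y0) all four estimators return x0 exactly; they differ in how they respond to peak asymmetry and noise.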
Sub-pixel peak-finding in the correlation image is biased towards integer pixels. In adaptive optics, these errors are often referred to as systematic bias errors, quantization errors, or sampling errors. Existing methods for their correction rely on modelling and a posteriori correction (Wöger & Rimmele 2009; Löfdahl 2010; Sidick 2013). These approaches are limited because the bias errors depend on (a) the adopted model; (b) the sub-aperture image characteristics; (c) the noise level; and (d) the combination of the correlation and peak-finding algorithms, making them difficult to model – especially in low signal-to-noise conditions.

In the following, the bias problem of centroid algorithms is addressed. In Section 2 the methods used are presented, including a novel algorithm for bias error reduction. The results on the bias performance of several peak-finding algorithms are presented in Section 3.1. It is found that no algorithm is simultaneously satisfactory for both bias and random errors. The results on the proposed solution to the bias error are presented and discussed in Section 3.2. In Section 4 we conclude by recalling the main ideas.

2 METHODS

2.1 Current peak-finding methods

Consider the reference (I0) and sub-aperture (IS) images, of size N × N pix2. The cross-correlation in the image domain (C) is given by \begin{eqnarray} C[m,n] = \sum _{i=1}^{N} \sum _{j=1}^{N} I_\mathrm{S}[i + m, j + n] I_0[i,j]. \end{eqnarray} (1) The image displacement in integer pixels is determined from the location of the correlation maximum, at pixel (x0, y0). The sub-pixel image displacement ($$s^{\prime }_x$$, $$s^{\prime }_y$$) is estimated by applying 2D centroid algorithms (cf. Table 1 and Section 1) to the correlation map C[m, n].
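The two-step measurement described above can be sketched as follows: equation (1) is evaluated by brute force over a small window of integer shifts (an FFT implementation would be used in practice), and step one returns the integer peak location; a Table 1 algorithm would then refine it to sub-pixel precision. The Gaussian test spots and the search half-width are illustrative assumptions:

```python
import numpy as np

def cross_correlate(I_S, I_0, max_shift=3):
    """Equation (1) on a (2*max_shift+1)^2 grid of integer shifts (m, n).
    Brute force for clarity; the sub-aperture image is edge-padded so the
    shifted window stays in bounds."""
    N = I_0.shape[0]
    C = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    P = np.pad(I_S, max_shift, mode="edge")
    for m in range(-max_shift, max_shift + 1):
        for n in range(-max_shift, max_shift + 1):
            window = P[max_shift + m:max_shift + m + N,
                       max_shift + n:max_shift + n + N]  # I_S[i+m, j+n]
            C[m + max_shift, n + max_shift] = np.sum(window * I_0)
    return C

# illustrative Gaussian spots on a 16 x 16 pix2 sub-aperture
x = np.arange(16)
X, Y = np.meshgrid(x, x)
I_0 = np.exp(-((X - 8)**2 + (Y - 8)**2) / 2.0)  # reference spot
I_S = np.exp(-((X - 9)**2 + (Y - 8)**2) / 2.0)  # displaced 1 pix in x
C = cross_correlate(I_S, I_0)
m0, n0 = np.unravel_index(np.argmax(C), C.shape)  # step 1: integer peak
shift = (m0 - 3, n0 - 3)                          # recovers (0, 1)
```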
In most algorithms only five pixels are used: (x0, y0), (x0 − 1, y0), (x0 + 1, y0), (x0, y0 − 1), and (x0, y0 + 1). For the 2D quadratic polynomial fit, nine pixels instead of five are required for the estimation of the six coefficients. The estimation of $$s^{\prime }_y$$ is analogous to $$s^{\prime }_x$$.

The displacement s΄ (in a given direction x or y) measured by the centroid algorithms is related to the real displacement s by \begin{eqnarray} s^{\prime }=s+\beta +\epsilon , \end{eqnarray} (2) where β is the bias error and ε the noise error. As noted in Section 1, these algorithms have systematic errors: the bias error β exhibits a ‘sinusoidal’ variation whose exact shape depends on ε, the image, and the centroid algorithm. In Fig. 1 an example of this bias is presented, in the absence of noise, for the cross-correlation algorithm (equation 1). The origin of the bias is well known in the strain measurement community: it is due to the transfer function of the centroid algorithm (e.g. Schreier, Braasch & Sutton 2000). For example, the transfer function of linear interpolation is not unitary but a complex number. Its modulus and phase change with the interpolation position, so a bias in intensity and position arises whenever an interpolation is made (cf. Schreier et al. 2000, for details).

Figure 1. Bias error for a point source with the centre of gravity centroid algorithm versus sx. The shift vector is $$\vec{s}=[s,s]^{\rm T}$$, i.e. sx = s.

In the presence of noise ε, the bias error β in equation (2) is estimated by taking the average of a large number of realizations, assuming that s is constant over the realizations.
The noise error is then significantly reduced and equation (2) becomes \begin{eqnarray} \left\langle s^{\prime }\right\rangle \simeq s+\beta , \end{eqnarray} (3) where 〈 〉 denotes the average. The noise error ε is estimated by the root-mean-square (RMS) deviation of the random sample of realizations \begin{eqnarray} \epsilon \simeq \sigma = \sqrt{\frac{\sum _{i=1}^n \left( s^{\prime }-\left\langle s^{\prime }\right\rangle \right)^2}{n}}, \end{eqnarray} (4) with n the number of realizations.

2.2 Window shift peak-finding algorithm

In the standard approach, the sub-pixel peak centre is determined by directly applying a peak-finding algorithm of Table 1. In this work, a method to reduce the bias error in the peak-finding is proposed. A similar method was previously applied in the context of particle image velocimetry (Gui & Wereley 2002) but, to our knowledge, it is presented here for the first time in the context of adaptive optics.

It is a two-step method: (a) coarse search; (b) fine search. In Algorithm 1 the pseudo-code of the method is presented. In the first step (coarse search, lines 2–4) the integer pixel maximum location (x0, y0) is found. In the second step (fine search, lines 5–19) an image region of interest (IROI, cf. line 12) is interpolated from the sub-aperture image IS. The interpolation is made with the same sampling as the original image. At each iteration, the interpolation is done at changing fractional initial positions δ (cf. line 7). Then the correlation between the reference I0 and each IROI is computed (cf. line 13). The sub-pixel displacements s΄ are then obtained using the peak-finding algorithms of Table 1 (cf. line 16). These sub-pixel displacements are then corrected by the step δ introduced during the interpolation (cf. line 17). This process is repeated K times, with varying δ (cf. line 7). Because K correlations are performed, s΄ is a vector of K elements. The individual displacements s΄[k] are affected by the bias β.
This bias is ‘sinusoidal’ and antisymmetric, with a period of 1 pix, as shown in Fig. 1. The algorithm then takes the average of all K displacements (cf. line 19), which reduces the bias by a factor approximately proportional to K.

For computational efficiency the cross-correlation C is not computed over all pixels but only for a sub-image of size 5 × 5 pix2 centred on the maximum, generating a cropped version of the cross-correlation: C5. Simulations show that the centroid algorithms behave similarly for C and C5. The combination of the conventional correlation on the original pixel grid (over a large field of view) and a sub-image grid correlation within a small field of view gives the algorithm a high dynamic range in the shift determination.

Algorithm 1: Window shift peak-finding algorithm. The function $${\tt FindCentre}$$ is one of the peak-finding algorithms presented in Table 1. The function $${\tt Correlation(I_\mathrm{S}; I_0)}$$ is given in equation (1). The function $${\tt FindCentreInteger(C)}$$ determines the integer pixel where the maximum of C is located. The function $${\tt Interpolate2D(I_{\rm S}; x; y)}$$ extracts an interpolated image $$I_\mathrm{ROI}$$ from $$I_{\rm S}$$ at grid array locations x and y. The function $${\tt Range2D(a,b,N)}$$ creates a square 2D mesh from (a,a) to (b,b), with N2 pixels. The IROI has the same dimensions as I0. The algorithm is explained in detail in Section 2.2.
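A minimal one-dimensional sketch of the idea behind Algorithm 1 (not the full 2D pseudo-code): the scene is sampled at K fractional offsets δ, each estimate is corrected by its δ, and the average cancels most of the antisymmetric bias. Here, regenerating an analytic Gaussian at an offset stands in for Interpolate2D, and a parabola fit applied directly to the spot plays the role of FindCentre; both are assumptions of this illustration:

```python
import numpy as np

def parabola_centre(sig):
    """Integer argmax plus 1D parabola sub-pixel refinement (Table 1, PF)."""
    i = int(np.argmax(sig))
    return i + 0.5 * (sig[i-1] - sig[i+1]) / (sig[i-1] + sig[i+1] - 2.0 * sig[i])

def scene(x, centre):
    """Analytic Gaussian spot; regenerating it at an offset stands in for
    the Interpolate2D resampling of Algorithm 1."""
    return np.exp(-(x - centre)**2 / 2.0)

x = np.arange(32, dtype=float)
true_centre = 16.25  # sub-pixel position where peak-locking is strong

# conventional estimate: a single measurement carries the full bias
direct = parabola_centre(scene(x, true_centre))

# window shift estimate: K resamplings at fractional offsets delta,
# each corrected by its delta, then averaged (fine search of Algorithm 1)
K = 5
estimates = []
for k in range(K):
    delta = (k - (K - 1) / 2) / K   # fractional start positions
    estimates.append(parabola_centre(scene(x, true_centre - delta)) + delta)
window_shift = float(np.mean(estimates))
```

In this toy setup the single-measurement error is ≈0.05 pix while the averaged estimate is well below 0.01 pix, mirroring the bias reduction reported in Section 3.2.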
2.3 Synthetic sub-aperture images

Four types of sub-aperture image models of relevance for astronomical adaptive optics were used: (a) a point source diffracted spot; (b) a laser guide star elongated spot (Schreiber et al. 2009); (c) a crowded field image; (d) a solar photosphere image (Löfdahl 2010). The point source and the laser guide star are simulated using 2D Gaussian profiles (circular 2 × 2 pix2 and elliptical 3 × 6 pix2 with a 45° rotation angle, respectively). The crowded field sub-aperture images are obtained by shifting and adding circular Gaussian profiles of varying intensity. To model the sub-aperture solar photosphere image, a Swedish Solar Telescope solar granulation image is used.2 All the sub-aperture images are Nyquist sampled and have a size of 16 × 16 pix2. The synthetic sub-aperture images are presented in Fig. 2.

Figure 2. Synthetic Shack–Hartmann sub-aperture images. From left to right: (a) the point source; (b) the elongated laser guide star (LGS); (c) the crowded field; (d) the solar photosphere. Colourbars indicate contrast levels.

The synthetic image shifts (s) due to atmospheric tilts are generated as follows. For the point source, laser guide star, and crowded field, the shifts s are directly applied to the Gaussian profiles. The original solar image has a factor of 10 finer sampling than the one used for the sub-aperture images. The original image is shifted and blurred to the target Nyquist resolution by convolving it with a PSF.
The resulting image is binned to generate a 16 × 16 pix2 sub-aperture image. Due to the extended and low contrast nature of the solar image, the cross-correlation algorithm is slightly adapted: the mean intensity is subtracted from the reference and sub-aperture solar images, because their linear intensity trend (low contrast) can shift the correlation centre from its correct position (Löfdahl 2010).

Noise is added to the synthetic sub-aperture images. For all images a Gaussian read-out noise (σR) of 1 e− pix−1 is assumed, in line with new generation detectors (Finger et al. 2014; Feautrier et al. 2016). Each synthetic image was generated with counts in each pixel following Poisson statistics. The total image SNR is calculated as \begin{eqnarray} \mathrm{SNR} = \frac{N_\mathrm{e}}{\sqrt{N_\mathrm{e} + \sigma _\mathrm{R}^2 N_\mathrm{P} }}, \end{eqnarray} (5) where Ne and NP are the total number of electrons and pixels in the sub-aperture image. For reference, Ne = 5 × 103 e− corresponds to a 9.5 mag H-band star with an integration time of 10−2 s, when a 9 × 9 lenslet array and an 8 m class telescope are considered. For the solar image case, Ne = 5 × 104 e− corresponds to an SNR = 104.

3 RESULTS AND DISCUSSION

3.1 Performance of current peak-finding methods

3.1.1 No noise case

To study the bias, synthetic sub-aperture images displaced horizontally at known positions s are generated. In this section, the sub-aperture images have no noise. The positions s varied from −1 to 1 pix, in steps of 0.05 pix. For each synthetic image, the correlation centres s΄ are computed with the conventional cross-correlation as described in Section 2.1. Then the peak-finding algorithms of Table 1 are applied to estimate the position s΄. The biases are then simply β = s΄ − s. For completeness, the bias is also presented for the determination of s΄ with the centre of gravity algorithm applied directly (i.e. without the correlation) to the point source and elongated laser guide star sub-aperture images.
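Equation (5) above is straightforward to evaluate; as a small sanity check, the reference case quoted there (Ne = 5 × 103 e−, a 16 × 16 pix2 sub-aperture, σR = 1 e− pix−1) gives an SNR of about 69:

```python
import numpy as np

def total_snr(N_e, N_P, sigma_R=1.0):
    """Equation (5): total image SNR with Poisson photon noise and
    Gaussian read-out noise sigma_R (e- per pixel)."""
    return N_e / np.sqrt(N_e + sigma_R**2 * N_P)

snr = float(total_snr(5e3, 16 * 16))  # about 69
```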
The results are presented in Fig. 3, for the four types of sub-aperture images.

Figure 3. Bias errors for various peak-finding algorithms and sub-aperture images. The algorithm colour/symbol legend is presented in the top left panel; cf. Table 1 for the abbreviations. The ‘Corr +’ label is used when the algorithm is applied to the correlation image. Sub-aperture images are point source (top left), laser guide star (top right), crowded field (bottom left), and solar photosphere (bottom right). The shift vector is $$\vec{s}=[s,s]^{\rm T}$$. The top left (point source) and top right (laser guide star) panels also include the results for direct application of the centre of gravity algorithm to the sub-aperture images (plus signs, labelled ‘CoG’).

It is found that the bias is antisymmetric (β(−x) = −β(x)) for all sub-aperture images, as expected. It is periodic for the point source, laser guide star, and crowded field images, but not for the solar image. This non-periodicity is due to the extended scene nature of the image: when a shift is applied, different parts of the image enter the field of view of the sub-aperture. Therefore the solar image does not have β = 0 pix at s = ±0.5 pix.
The exact shape of the bias curve depends on the centroid algorithm and also on the nature of the image. In the [−0.5, 0.5] pix interval the bias extreme values are located approximately at s = ±0.25 pix for all images except the laser guide star.3 Sharp transitions are observed for extended sub-aperture images at pixel positions s = ±0.5 pix. This is due to the shift vector being diagonal, $$\vec{s}=[s,s]^{\rm T}$$, which translates into the peak of the correlation being ‘split’ between two diagonal pixels. When the shift s is, e.g., 0.4 pix, the brighter pixel is the lower left one and the bias is negative. When the shift is, e.g., 0.6 pix, the brighter pixel is the top right one and the bias is positive.

The best performing centroid algorithm depends on the image: (a) for the point source and crowded field, it is the Gaussian fit; (b) for the solar image it is the 2D quadratic polynomial; (c) for the laser guide star it is the centre of gravity. The reason for this behaviour is the matching of the algorithm to the actual shape of each image correlation centre (e.g. the point source and crowded field are generated with circular Gaussians).

3.1.2 Varying signal-to-noise ratio

The centroid algorithms’ performance was tested in varying SNR conditions. For each SNR, 500 random realizations are generated at an input shift vector $$\vec{s}=[s_\mathrm{max},s_\mathrm{max}]^{\rm T}$$ where the bias is approximately maximum: (a) smax = 0.25 pix for the point source, crowded field, and solar images; (b) smax = 0.4 pix for the laser guide star image. Note that the exact maximum location depends on the algorithm and therefore smax is approximate. The input shifts are measured by applying the conventional cross-correlation (equation 1) and the centroid algorithms (cf. Table 1). The bias and noise errors are computed using equations (3) and (4), respectively. The SNR is varied via Ne in the sub-aperture images. The results are presented in Fig. 4. The first column presents the RMS centroid error (σ).
It is computed as the RMS of s΄ and decreases with SNR, as expected. The direct image centre determination via the centre of gravity algorithm behaves worse for the point source and laser guide star than the correlation algorithms. The superiority of the correlation with respect to σ is well known in the literature (e.g. Thomas et al. 2006). Intuitively it is expected because of the noise smoothing and shape matching. All correlation algorithms have a similar behaviour.

Figure 4. Performance of centroid algorithms with SNR. Left column: RMS centroid error σ; centre column: |〈s΄〉 − s|, which converges to the bias error β at high SNR; right column: 〈s΄〉 − s versus σ. The rows from top to bottom are point source, laser guide star, crowded field, and solar image, respectively. The dotted line in the right column traces 〈s΄〉 − s = σ, where bias error and random error are equal; data points above this line indicate bias error domination over random error.

The centre column of Fig. 4 shows that |〈s΄〉 − s | increases to an asymptotic value, the bias error β. |〈s΄〉 − s | is not constant because at low SNR the effective image shape changes. Intuitively, the low value of |〈s΄〉 − s | at low SNR can be explained by a pure noise image, for which the bias is expected to be zero.
The large values of |〈s΄〉 − s | at low SNR for the solar image (centre column, bottom row of Fig. 4) are due to the large variance of |〈s΄〉 − s | for this sub-aperture image. The large bias for the laser guide star in comparison to the others is caused by the shape of the correlation peak, which is elongated and oriented at a 45° rotation angle. The right column of Fig. 4 plots |〈s΄〉 − s | versus σ. The dotted line is the curve |〈s΄〉 − s | = σ; points above it show a bias error larger than the centroid error. Typically the bias error is larger than the noise error for SNR larger than 10, except for the solar case, where it becomes important for SNR larger than 200.

Figure 5. Bias errors of the window shift method in comparison with the conventional approach. The algorithm legend is presented in the top left panel. Sub-aperture images are point source (top left), laser guide star (top right), crowded field (bottom left), and solar photosphere (bottom right). The shift vector is $$\vec{s}=[s,s]^{\rm T}$$. The top row also includes the performance for direct centroiding of the sub-aperture images (labelled CoG).

3.2 Performance of the window shift method

In this section the performance of the algorithm introduced in Section 2.2 is presented, initially for a fixed SNR and then for a varying SNR.

3.2.1 Fixed signal-to-noise ratio

Fig. 5 shows the comparison of the window shift method with the conventional cross-correlation algorithm.
One of the worst performing centroid algorithms – the centre of gravity – was used. The SNR conditions are the same as in Section 3.1.1. The sampling factor is K = 5. The window shift method drastically reduces the bias. For the solar image the final bias is larger, because the shift includes pixels from the edge of the image that are not present in the reference image.

3.2.2 Varying sampling factor K

The effect of the sampling factor K in reducing the bias β of the centroid algorithm is presented in Fig. 6. The setup is the same as in Section 3.2.1, except for the sampling factor K, which is varied. The centre of gravity algorithm was used, given its better performance at lower SNR for the point source, crowded field, and solar images.

Figure 6. Performance of the window shift method as a function of the sampling factor K. The curves are for the point source, laser guide star (LGS), crowded field (CF), and solar image. The dashed curve depicts the function β(K) = 0.1/K.

The bias β strongly decreases with the sampling factor K, approximately following a ∝ K−1 relation. For sampling factors K > 5–6 no significant improvement is observed. This behaviour is similar to the one observed by Gui & Wereley (2002) in a different context.

3.2.3 Varying signal-to-noise ratio

The performance of the window shift method as a function of SNR is presented in Fig. 7. The same setup as in Section 3.1.2 is used (e.g. the s position). The sampling factor is K = 5. The residual errors are well below 0.05 pix for all SNRs and sub-aperture image types.

Figure 7.
Residual bias errors for the window shift method as a function of SNR: point source (top left); laser guide star (top right); crowded field (bottom left); and solar photosphere (bottom right).

The performance is similar to that of the windowed, adaptive thresholding centre of mass method of Townson et al. (2015).4

3.2.4 Computational efficiency

Several factors can influence the total execution time of the algorithm, notably the sub-aperture window size, the efficiency of the implementation (e.g. multithreading), the performance of the hardware, and the programming language. Therefore only the relative computational efficiency was computed. The proposed window shift method is slower than the conventional algorithm by a factor of 2.4 and 3.6 for K = 3 and K = 5, respectively. For larger sub-apertures, the computational time ratio is reduced because the window shift method uses a fixed and small correlation sub-image C5. If the sub-aperture size is increased from 16 × 16 to 32 × 32 pix2, the window shift method is slower by a factor of 1.4 and 1.8 for K = 3 and K = 5, respectively.

4 CONCLUSIONS

A systematic study of the bias error in conventional centroid algorithms used for slope measurement in Shack–Hartmann wavefront sensors is presented for the first time. It is found that the bias can be of the same order of magnitude as the centroid error, especially at moderate and high SNR: typically the bias error is larger than the noise error for SNR larger than 10, except for the solar case, where it becomes important for SNR larger than 200. No centroid method reduces both the bias and noise error terms in conventional correlation methods.
A window shift method is proposed, based on the antisymmetric nature of the bias. It works by sampling the sub-aperture image K times, at the same resolution, but shifted by a sub-pixel step whose size is a function of K. The K shifts obtained are then averaged, significantly reducing the bias. The window shift method is studied as a function of image type, centroid algorithm, SNR, and sampling factor K. It is found that it robustly reduces the bias by a factor of ∼7, to values of ≲ 0.02 pix. The computational cost of the algorithm is optimized by obtaining the correlation in two steps: (a) a coarse search over a large region; (b) a fine search over the small sub-image C5. The cost ranges between a factor of 1.4 and 3.6 of that of conventional approaches.

The window shift method can be applied to other algorithms that work similarly to the cross-correlation, such as the square difference function, the absolute difference function, and the square of the absolute difference function (Löfdahl 2010). The square difference function is especially important for solar-type images, as it gives a significantly smaller random error and a more antisymmetric pattern of systematic error (Löfdahl 2010), although its systematic error values are larger in comparison to the cross-correlation. The proposed method would be of relevance for the square difference function to reduce its systematic error by exploiting its consistent antisymmetric pattern. Further developments are the study of the window shift algorithm for sub-aperture images sampled below the critical sampling, and for Shack–Hartmann devices with a small number of apertures, such as those used for fast tip-tilt correction or pupil tracking.

Acknowledgements

We are grateful to J. P. Lancelot and V. Akondi for commenting on and proof reading the manuscript. We thank the referee for her/his constructive and insightful report.
This research was partially supported by Fundação para a Ciência e a Tecnologia (contracts PTDC/CTE-AST/116561/2010, SFRH/BD/52066/2012, UID/FIS/00099/2013) and the European Commission (EC) (Grant Agreement 312430). NA acknowledges support from the ERC Starting Grant ‘ImagePlanetFormDiscs’ (Grant Agreement No. 639889). CMC wishes to acknowledge the A*MIDEX project (no. ANR-11-IDEX-0001-02) funded by the French Government programme ‘Investissements d’Avenir’, managed by the French National Research Agency (ANR).

FOOTNOTES

1 Cf. Basden et al. (2014) for several approaches to reference image generation.

2 http://www.isf.astro.su.se/gallery/

3 The shape of the β curve for the laser guide star is due to applying the shift $$\vec{s}=[s,s]^{\rm T}$$ along the diagonal of the image. For a horizontal shift, it would have a shape similar to the point source and crowded field.

4 Note that a different SNR metric is used in Townson et al. (2015): in their fig. 7, an SNR of 20 corresponds to an SNR ∼ 100 in our Fig. 7.

REFERENCES

Bailey D. G., 2003, in Bailey D. G., ed., Proc. Image and Vision Computing. Massey University, Palmerston North, New Zealand, p. 414

Basden A. G., Chemla F., Dipper N., Gendron E., Henry D., Morris T., Rousset G., Vidal F., 2014, MNRAS, 439, 968

Booth M. J., 2014, Light Sci. Appl., 3, e165

Burns S. A., Elsner A. E., Chui T. Y., VanNasdale D. A., Clark C. A., Gast T. J., Malinovsky V. E., Phan A. D. T., 2014, Biomed. Opt. Express, 5, 961

Dai G.-M., 1996, J. Opt. Soc. Am. A, 13, 1218

Feautrier P., Gach J.-L., Bério P., 2016, in Malbet F., Creech-Eakman M. J., Tuthill P. G., eds, Proc. SPIE Conf. Ser. Vol. 9907, Optical and Infrared Interferometry and Imaging V. SPIE, Bellingham, p. 990715

Finger G., Baker I., Alvarez D., et al., 2014, in Marchetti E., Close L. M., Véran J.-P., eds, Proc. SPIE Conf. Ser. Vol. 9148, Adaptive Optics Systems IV. SPIE, Bellingham, p. 914817

Gratadour D., Gendron E., Rousset G., 2010, J. Opt. Soc. Am. A, 27, A171

Gratadour D., Mugnier L. M., Rouan D., 2005, A&A, 443, 357

Gui L., Wereley S. T., 2002, Exp. Fluids, 32, 506

Löfdahl M. G., 2010, A&A, 524, A90

Malacara D., 2007, Optical Shop Testing. John Wiley, Hoboken, NJ

Nobach H., Honkanen M., 2005, Exp. Fluids, 38, 511

Poyneer L. A., 2003, Appl. Opt., 42, 5807

Rais M., Morel J. M., Thiebaut C., Delvit J. M., Facciolo G., 2016, J. Phys. Conf. Ser., 756, 012002

Robert C., Michau V., Fleury B., Magli S., Vial L., 2012, Opt. Express, 20, 15636

Schreiber L., Foppiani I., Robert C., Diolaiti E., Conan J.-M., Lombini M., 2009, MNRAS, 396, 1513

Schreier H. W., Braasch J. R., Sutton M. A., 2000, Opt. Eng., 39, 11

Sidick E., 2013, Appl. Opt., 52, 6487

Thomas S., Fusco T., Tokovinin A., et al., 2006, MNRAS, 371, 323

Thomas S. J., Adkins S., Gavel D., Fusco T., Michau V., 2008, MNRAS, 387, 173

Townson M. J., Kellerer A., Saunter C. D., 2015, MNRAS, 452, 4022

Tyson R. K., 2015, Principles of Adaptive Optics, 4th edn. CRC Press, Boca Raton, FL

Vijaya Kumar B. V. K., Dickey F. M., Delaurentis J. M., 1992, J. Opt. Soc. Am. A, 9, 678

Wöger F., Rimmele T., 2009, Appl. Opt., 48, A35

