Ralph Maschotta, Simon Boymann, and Ulrich Hoppe
Comparison of Feature-List Cross-Correlation Algorithms with Common Cross-Correlation Algorithms
1 Institute of Biomedical Engineering and Informatics, Ilmenau Technical University, P.O. Box 100565, 98693 Ilmenau, Germany
2 Department of Audiology, University Hospital of Erlangen-Nuremberg, Waldstr. 1, 91054 Erlangen, Germany

Received 1 August 2005; Revised 20 December 2006; Accepted 21 December 2006

Recommended by Rafael Molina

This paper presents a feature-list cross-correlation algorithm based on a common feature extraction algorithm, a transformation of the results into a feature-list representation form, and a list-based cross-correlation algorithm. The feature-list cross-correlation algorithms are compared with known results of the common cross-correlation algorithms. For this purpose, simple test images containing different objects under changing image conditions and with several image distortions are used. In addition, a medical application is used to verify the results. The results are analyzed by means of the curve progression of the coefficients and the curve progression of the peak signal-to-noise ratio (PSNR). The presented feature-list cross-correlation algorithms are sensitive to all changes of image conditions. Therefore, it is possible to separate objects that are similar but not equal. Because of the high number of feature points and the strong PSNR, the loss of a few feature points does not have a significant influence on the detection results. These results are confirmed by a successfully applied medical application. The calculation time of the feature-list cross-correlation algorithms depends only on the lengths of the feature lists. The number of feature points is much smaller than the number of pixels in the image. Therefore, the feature-list cross-correlation algorithms are faster than common cross-correlation algorithms. Better image conditions tend to reduce the size of the feature list. Hence, the processing time decreases considerably.
Copyright © 2007. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

The two-dimensional cross-correlation is a simple and robust algorithm used to solve different problems in the field of image processing. However, when images are rotated, scaled, or include other image distortions, the computation time increases considerably. Furthermore, extensive changes in brightness or contrast, or strong outliers, cause false results [1]. Because of this, optimized algorithms have been developed, namely, the cross-correlation algorithm based on least squared error [2], the block-matching algorithms [3], techniques based on the Fourier transform [4], wavelet-based techniques [5], and feature-based techniques [2, 6-10]. These algorithms are used for many problems such as motion estimation [11], video coding [12], target detection [13], character recognition [8], image registration [14], or image fusion [15]. In [16], a summary of visual tracking techniques and problems with a focus on motion estimation in log-polar images is presented. Some references for motion estimation on Cartesian images can be found in [17]. Similar to [6, 18, 19], in this paper single feature points are saved in a feature list, along with their positions and the value of the feature. In contrast to the earlier described methods, where single selected points with certain features were used, this paper presents a threshold-based selection of the feature points. Additionally, the proposed method allows the use of simple feature extraction algorithms such as the Canny or Laplacian of Gaussian edge detectors [20, 21]. The applicability is shown for the Sobel operator. However, other feature extraction algorithms are also possible [9, 22, 23]. The matching algorithms are based on the two-dimensional cross-correlation algorithm.
It has been adapted to the list-based cross-correlation algorithm. The definition of this algorithm is similar to the discrete generalized Radon transform [24]. However, there are some distinctions. Firstly, the aim of the Radon transform [25] is the transformation into a parameter domain, whereas the aim of the list-based cross-correlation is the calculation of the coefficients of the two-dimensional cross-correlation. The Hough transform can be interpreted as a special case of the Radon transform [18, 19]. Hence, usually binary images are used instead. The literature also presents improvements of the Hough transform or Radon transform with respect to the cross-correlation [19, 26, 27]. In this paper, the cross-correlation between two gray-scale images is the focus of the analysis. Secondly, the generalized Hough transform uses a reference table that characterizes a template shape [28, 29], which is accurately selected, while the presented algorithm initially uses the whole image and all gray-scale values. It is only to reduce the processing effort that simple feature values are used. Also, different forms of cross-correlation are analyzed. The binary cross-correlation is similar to the cross-correlation using the Hough transform. But different possibilities for measuring the difference values for the cross-correlation are not considered there. This paper considers, as an example, an algorithm based on the difference. Other distance measurements, such as the mean-squared error, are possible by using the list-based cross-correlation. Thirdly, in [19, 26] the relation between the Radon transform and the cross-correlation is shown. However, in these works only the template is transformed into another representation form and the pixel representation of the image remains unaffected. The calculation is performed for all the pixels in the image. In this paper, both the template and the image are transformed into a list representation form.
The list-based algorithms use only these lists to calculate a two-dimensional cross-correlation. Zero values do not contribute to the result of a cross-correlation. Therefore, only image points above a threshold are used and only these required positions are calculated. This is also a major advantage of a technique called image point mapping, which is presented in [24]. This image point mapping is used to calculate the discrete generalized Radon transform. In this paper, the feature-list cross-correlation algorithms are compared with the common cross-correlation algorithms. In the following section, the principle of the list-based cross-correlation algorithm is presented and the feature-list cross-correlation algorithm is described. The methods of comparison used and the test images with particular image distortions are presented afterwards. Additionally, a medical application is described, which is used to verify the results of the different algorithms. The results and the discussion are presented afterwards.

2. ALGORITHMS AND METHODS

2.1. List representation of images

In the field of image processing, a digital image b can be defined as a two-dimensional array of colour points v (pixels). The positions of the pixels are determined by the topology. Thus, it is possible to access every pixel by its x and y coordinates (1),

b[x, y] = v. (1)

An image can also be defined as a sorted sequence of pixels. It can be transformed into this vector-based representation form without losing any information (2),

b[i] = b[x, y] = v,  i = y · Nb + x,  Nb number of columns. (2)

In this case, it is necessary to know the size Nb and Mb of the image. This form is usually used to implement image processing algorithms. Another possible way of representing images can be described as a list-based representation form. It describes the image as an unsorted list of vectors, where every vector contains the position and the value of the different parameters at this position (3),

bx[n] = x,  by[n] = y,  bv[n] = b[x, y],
where x = 1 · · · Nb, y = 1 · · · Mb, n = 1 · · · Nb · Mb,
Nb number of columns, Mb number of rows. (3)

The position of a pixel can also be negative. This is useful for some image operations. In the literature, similar forms are also used as parameter vectors or parameter tables [6, 19, 27, 28]. In this paper, the coordinates of the pixel and its intensity value or its absolute gradient value are used. For the list-based algorithms, all source and template images are transformed into this form of representation. The size Nb and Mb of the image can be computed from the maximum and the minimum of the x and y positions (4),

Nb = max bx[n] − min bx[n] + 1,
Mb = max by[n] − min by[n] + 1. (4)

By using this list-based form, any image operation can be performed. In this paper, the list-based representation form is used to compute the two-dimensional cross-correlation, which is described in the following section.

2.2. List-based cross-correlation algorithm

In discrete space, the two-dimensional cross-correlation algorithm (CCA) is defined as

g[x, y] = Σ_{j,i} h[i, j] · b[x + i, y + j],
where i = −(Nh − 1)/2 · · · (Nh − 1)/2,  j = −(Mh − 1)/2 · · · (Mh − 1)/2, (5)

for x = 1 · · · Ng, y = 1 · · · Mg, where Mb × Nb is the size of the source image, Mh × Nh is the size of the template image, and Mg × Ng is the size of the resulting image. It is necessary to calculate (5) for each pixel of the result image g. The computation time of this algorithm depends on the size of both images b and h. Hence, the algorithm needs O(Nb · Mb · Nh · Mh) computation time. In [26], the generalized Radon transform is used to calculate the cross-correlation. In [24], the image point mapping technique (IPM) is presented to calculate the Radon transform.
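For reference, the pixel-based CCA of (5) can be written in a few lines. The following is an illustrative numpy sketch, not the authors' implementation; the function name, the zero padding at the borders, and the restriction to odd template sides are assumptions made here for brevity.

```python
import numpy as np

def cca(b, h):
    """Plain two-dimensional cross-correlation following (5).

    b: 2-D source image (Mb x Nb); h: 2-D template (Mh x Nh, odd sides).
    Returns a result image g with one coefficient per source position.
    """
    Mb, Nb = b.shape
    Mh, Nh = h.shape
    rj, ri = (Mh - 1) // 2, (Nh - 1) // 2
    # Zero-pad the source so the template window always fits (an assumption;
    # the paper avoids the marginal problem via a border in the test images).
    bp = np.pad(b.astype(float), ((rj, rj), (ri, ri)))
    g = np.zeros((Mb, Nb))
    for y in range(Mb):          # four nested loops in total, giving the
        for x in range(Nb):      # O(Nb*Mb*Nh*Mh) cost; the inner two loops
            g[y, x] = np.sum(h * bp[y:y + Mh, x:x + Nh])  # are vectorized
    return g
```

Even in this compact form, the cost per output pixel is the full template size, which motivates the list-based reformulation described next.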
These are the fundamentals for the feature-list cross-correlation algorithm. The IPM technique uses the discrete generalized Radon transform, which can be defined as follows (6):

g(l) = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} b[i, j] · δ(j − φ(i; l)),
where δ(x) = 1 for x = 0, δ(x) = 0 for x ≠ 0,  l = (l1, l2, . . . , lη), (6)

where g denotes the discrete generalized Radon transform of b[i, j], l denotes an η-dimensional discrete index parameter vector, and δ(x) denotes the Kronecker delta function. Finally, φ(i; l) denotes a discrete index transformation curve, where j = φ(i; l). (For the definition of the Radon transform, see [19, 24-26].) The IPM technique calculates the summation only for image values different from zero and only for possible vectors lr; for detailed information see [24].

In this paper, this IPM technique is used to calculate the coefficients of the cross-correlation g[x, y] directly. Furthermore, only images in the list-based representation form (3) are used. Hence, the source image b and the template image h are transformed into this representation form. To calculate the cross-correlation by using the IPM technique, a function is defined that calculates the position where the measurement of the distance value of the cross-correlation, denoted as ci,j, has an influence on the result matrix g[x, y]. This function is denoted as Px,y,i,j. Hence, the cross-correlation algorithm can be transformed into the list-based cross-correlation algorithm (7):

g[x, y] = Σ_{i=1}^{Sb} Σ_{j=1}^{Sh} ci,j · Px,y,i,j,
where ci,j = bv[i] · hv[j],
Px,y,i,j = δ(x − (bx[i] − hx[j])) · δ(y − (by[i] − hy[j])),
Sb = Nb · Mb,  Sh = Nh · Mh,
for x = 1 · · · Ng, y = 1 · · · Mg. (7)

Every entry in the image list is calculated with every entry of the template list. The measurement of the distance value of the cross-correlation ci,j can be replaced by other measurements such as the least-square error, normed distance measurements, or others. In this paper, a binary and a difference-based measurement are additionally used (see Section 2.4). Because of the length Nb · Mb of the source image list and the length Nh · Mh of the template image list, the list-based cross-correlation algorithm needs O((Nb · Mb · Nh · Mh) · Ng · Mg) computation time.

By examining formula (7), it can be concluded that the summation at position g[x, y] is only necessary for P ≠ 0. The positions where P ≠ 0 can be calculated as follows (8):

x = bx[i] − hx[j],  y = by[i] − hy[j]. (8)

At these positions, the product of bv[i] and hv[j] can be added up in a summation matrix. Hence, this algorithm depends only on the size of the image lists of b and h. It needs only O(Nb · Mb · Nh · Mh) computation time. However, the algorithm requires extra time in each operation to calculate the positions. But compared to the CCA (5), only two loops are necessary to process the whole image.

The presented algorithm will only be useful if the computation can be further optimized. In [24], each computation for image points with a value of zero is omitted. By investigating formula (7), it can be concluded that the product of bv[i] and hv[j] only has an influence on the result if both values are nonzero (9):

ci,j = 0 for bv[i] = 0 ∨ hv[j] = 0,
ci,j = bv[i] · hv[j] otherwise. (9)

Hence, it is possible to drop every value equal to zero from the image lists and the template lists. This reduces the list sizes Sb and Sh. The size of the image list is now independent of the image size. It depends only on the length of the image list, which depends on the contents of the image. In any case, the list-based cross-correlation algorithm needs only O(Sb · Sh) computation time. Additionally, it is possible to transform the formulas above into the vector-based representation form, as presented in (2). Hence, the memory required for the image lists and the computation effort required to calculate the positions decrease.
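The list-based cross-correlation of (7)-(9) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, positions are kept non-negative (the paper also allows negative positions), and contributions whose target position (8) falls outside the result matrix are simply discarded.

```python
import numpy as np

def to_feature_list(img, threshold=0):
    """Transform an image into the list representation (3), dropping
    values whose magnitude is at or below the threshold, as in (9)."""
    ys, xs = np.nonzero(np.abs(img) > threshold)
    return xs, ys, img[ys, xs].astype(float)

def list_cca(b_img, h_img, threshold=0):
    """List-based cross-correlation following (7)-(9): for each pair of
    list entries, the target position (8) is computed and the product
    c_ij = bv[i] * hv[j] is accumulated in a summation matrix."""
    bx, by, bv = to_feature_list(b_img, threshold)
    hx, hy, hv = to_feature_list(h_img, threshold)
    g = np.zeros(b_img.shape)
    Mg, Ng = g.shape
    for j in range(len(hv)):      # only two loops over the lists; the loop
        x = bx - hx[j]            # over the image list is vectorized
        y = by - hy[j]
        ok = (x >= 0) & (x < Ng) & (y >= 0) & (y < Mg)
        np.add.at(g, (y[ok], x[ok]), bv[ok] * hv[j])
    return g
```

Note that the cost is proportional to the product of the two list lengths Sb · Sh, so thresholding away zero (or near-zero) values shortens both loops directly.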
2.3. Feature list

Algorithm (7) is only faster than the CCA (5) if the image contains large regions of zero values. However, this is unusual. Therefore, preprocessing operations are necessary to reduce the number of nonzero pixels. However, the significant information should not be removed. One possibility for reducing the number of pixels is the calculation of local feature points. Possible algorithms include, for instance, edge-detection or corner-detection algorithms [9, 30-33]. The algorithms must be able to achieve consistent results and be robust with respect to noise and changing image conditions, which have a major influence on the correlation results. In this paper, the 3 × 3 Sobel operator [20, 21] is used in the horizontal and vertical directions with signed result values. The Sobel operator has been chosen to demonstrate the assets and drawbacks of feature-based cross-correlation algorithms. In [23], additional feature extraction algorithms for feature-list cross-correlation algorithms are analyzed for detecting blood vessels in human retinal images. In this analysis, the Sobel operator obtained good results. In this paper, the signed results of the Sobel operator in both directions are transformed into the feature-list representation form (3) only if the absolute feature value exceeds a constant predefined threshold value. Finally, both feature lists have to be concatenated. The use of the gradient and magnitude values of the edge is suggested for practical applications.

2.4. Cross-correlation algorithms

In the field of image processing, cross-correlation algorithms are used in different variations. Multiplication can, for example, be replaced by calculating the difference, the mean-squared error, the absolute error, or the median squared error [2, 16, 17, 20, 34, 35]. The CCA as defined in (5) is robust with regard to noise. However, a bright spot will have a strong influence on the result [7, 35].
By using subtraction, the algorithm becomes robust with regard to single outliers but sensitive with regard to noise. The normalized cross-correlation coefficient [35] and empirical cross-correlation algorithms obtain better results [36]. These algorithms use the local mean value or the local variance value. Therefore, these algorithms require more computational effort. However, there exist alternative implementations for a fast normalized cross-correlation [35], which reduce the computational effort. In contrast, the binary cross-correlation [1, 37] is fast, but its results are worse. The influence of varying image conditions, changing object forms, and image contents on the results of common cross-correlation algorithms has already been analyzed (see [1]). In any case, in this paper the common cross-correlation algorithms, more precisely the cross-correlation algorithm (CCA) (5) and the normalized cross-correlation algorithm (NCCA) (10) [34], are compared to three different feature-list cross-correlation algorithms. These algorithms are the feature-list cross-correlation algorithm (FLA) (7), the feature-list cross-correlation algorithm using difference values (DFLA) (13), and the binary feature-list cross-correlation algorithm (BFLA) (11), which is similar to the cross-correlation using the Hough transform:

g[x, y] = Σ_{j,i} h[i + (Nh − 1)/2, j + (Mh − 1)/2] · b[x + i, y + j]
          / sqrt( Σ_{j,i} h[i, j]^2 · Σ_{u,v} b[x + u, y + v]^2 ),
for x = 1 · · · Nb, y = 1 · · · Mb,
where i = −(Nh − 1)/2 · · · (Nh − 1)/2,  j = −(Mh − 1)/2 · · · (Mh − 1)/2,
u = −(Nb − 1)/2 · · · (Nb − 1)/2,  v = −(Mb − 1)/2 · · · (Mb − 1)/2. (10)

In formula (9), the condition for reducing the size of the feature list is shown. By using a reduced feature list, all feature-list cross-correlation algorithms have to take this condition into account; it is therefore added to the BFLA and the DFLA. Hence, the behavior of these algorithms differs from that of common binary or difference algorithms:

ci,j = 0 for bv[i] = 0 ∨ hv[j] = 0,
ci,j = 1 otherwise, (11)

ci,j = 0 for bv[i] = 0 ∨ hv[j] = 0,
ci,j = |bv[i] − hv[j]| otherwise. (12)

In contrast to other cross-correlation algorithms, algorithm (12) achieves the best matches at the minimum value. Therefore, its results are subtracted from the maximum value of the image (13). In this paper, a constant maximum value of 255 is used:

ci,j = 0 for bv[i] = 0 ∨ hv[j] = 0,
ci,j = max(bv) − |bv[i] − hv[j]| otherwise. (13)

2.5. Evaluation

To evaluate and compare the results of the different feature-list cross-correlation algorithms, several tests using different artificial images, templates, image parameters, image distortions, and evaluation parameters are run. Two simple objects, a circle and a triangle, are used as an image and as a template. In a previous analysis [38], these templates have shown the most differing results. In other common analyses of cross-correlation algorithms (e.g., see [1]), the brightness and contrast of the images are also modified, and the images are scaled, blurred, and degraded by noise. In addition to these analyses, in this paper a template is searched for which is not present in the image. The coefficients of the cross-correlation algorithms and the peak signal-to-noise ratio (PSNR) are compared for all kinds of distortions. To validate the former results on real images, a medical application is used. For this purpose, different templates of different sizes are searched in human retinal blood vessel image series to calculate the image displacement. The number of incorrectly detected templates is compared.

2.5.1. Test images

For the evaluation, 8-bit gray-scale images showing a circle with a diameter of 81 pixels and an equilateral triangle of the same size are used. The size of the equilateral triangle is determined by the size of the wrapped circle.

Figure 1: Example of a test image of triangles with changed brightness. The original image is the 7th image.
The diameter of this wrapped circle is 81 pixels. The centre point of the triangle is in the middle of the image. Both objects have a gray-scale value of 128. This value allows the brightness to be increased. The size of the template is 91 × 91 pixels. It is determined by the size of the object and a border of 5 pixels. The border is used to allow different convolution matrix sizes for the feature extraction algorithm and to avoid the related marginal problem. For the feature-list cross-correlation algorithms, the feature lists are created first. The lengths of the feature lists of the templates are about 900 feature points for the triangle template and about 1100 feature points for the circle template. Hence, the length of the feature list is about 8 times smaller than the number of pixels in the template image. One test image for each kind of image modification has been created. In this way, the effect of a single variation can be analyzed separately. Every test image consists of 21 different object images. These object images, having been changed iteratively, are arranged horizontally. The object image size is 293 × 293 pixels. It is derived from the maximum object size of 101 pixels, plus a border of 5 pixels, plus two times the size of the template. The maximum object size depends on the maximum scaling value (see Section 2.5.4). An additional border of the size of the template minus one, divided by two, determines the test image size to be 6243 × 383 pixels. Because all the borders add space, the results of the cross-correlation for each modification are independent of the results of the neighbouring objects. In Figure 1, an example of a test image is shown. For the medical application, human retinal blood vessel image series from five test persons (see Figure 2) [23, 39] are used. The image series include 21 to 26 single gray-scale fundus images of five healthy subjects. The images have a size of 768 × 576 pixels.
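The feature-list construction described in Section 2.3 (signed 3 × 3 Sobel responses kept only where their magnitude exceeds a threshold, with the horizontal and vertical lists concatenated) might be sketched as follows. The helper names and the valid-mode border handling are assumptions of this sketch, not details given in the paper.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter3x3(img, k):
    """Valid-mode 3x3 filtering (cross-correlation, no kernel flip).
    Border pixels are skipped, mirroring the paper's use of a template
    border to avoid the marginal problem."""
    M, N = img.shape
    out = np.zeros((M - 2, N - 2))
    for j in range(3):
        for i in range(3):
            out += k[j, i] * img[j:j + M - 2, i:i + N - 2]
    return out

def sobel_feature_list(img, threshold):
    """Signed horizontal and vertical Sobel responses are turned into the
    list form (3) wherever |value| exceeds the threshold; the two lists
    are concatenated, so one edge point may appear twice (Section 2.5.2)."""
    img = img.astype(float)
    parts = []
    for grad in (filter3x3(img, SOBEL_X), filter3x3(img, SOBEL_Y)):
        ys, xs = np.nonzero(np.abs(grad) > threshold)
        parts.append((xs + 1, ys + 1, grad[ys, xs]))  # +1: valid-mode offset
    xs = np.concatenate([p[0] for p in parts])
    ys = np.concatenate([p[1] for p in parts])
    vs = np.concatenate([p[2] for p in parts])
    return xs, ys, vs
```

With a suitable threshold, flat regions contribute no entries at all, which is what shrinks the template lists to the roughly 900-1100 points reported above.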
These images are of good quality, as short flashes were used as the fundus illumination. In addition, an optical green filter of 560 nm was used. In total, 119 medical images were analyzed. The first image of each series is used to extract different templates of different sizes. Three medium templates with a size of 100 × 100 pixels, one small template with a size of approximately 40 × 40 pixels, and two large templates of approximately 250 × 150 pixels are used (see Figure 2).

2.5.2. Evaluation measures

The coefficients of the cross-correlation algorithms have different ranges of values. The normalized cross-correlation has a range of values between zero and one. For comparing the results, the coefficients are normalized by the size of the template or by the length of the feature list. It is possible that one edge point exists in the feature list twice, because the results of the Sobel operator in the horizontal and vertical directions are both stored. Therefore, the coefficients of the list-based cross-correlation algorithms are sometimes greater than one. In addition to the coefficients, the peak signal-to-noise ratio (14) is also calculated:

PSNR[x, y] = 10 · log10( (f[x, y] − f̄)^2 / ( 1/(Mh · Nh − 1) · Σ_{j,i} (f[i, j] − f̄)^2 ) ), (14)

where (x, y) is the position of the maximum value, f̄ denotes the mean value of the result region f, and

i = −(Nh − 1)/2 · · · (Nh − 1)/2,  j = −(Mh − 1)/2 · · · (Mh − 1)/2. (15)

The result region is determined by the corresponding modification step and has the same size as the template (Mh × Nh). For each modification step, the value in the middle of the result region is used as the peak value for the PSNR calculation. Sometimes, the maximum value is not in the middle of the result region, where it should be due to the symmetry of the templates. This information is evaluated and presented as markers in the result graphs (e.g., see Section 3, Figure 7).

2.5.3. Variation of image conditions

To analyze the behavior of the different cross-correlation algorithms, the image conditions are changed in various ways.
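The PSNR measure of (14) can be sketched as below. This reading assumes a base-10 logarithm and the mean taken over the whole result region; the function name is hypothetical.

```python
import numpy as np

def psnr(result, peak_pos):
    """PSNR of a correlation-result region following (14): the squared
    deviation of the peak value at peak_pos from the region mean, over
    the (Mh*Nh - 1)-normalized squared deviations of the whole region."""
    f = result.astype(float)
    y, x = peak_pos
    fbar = f.mean()
    noise = np.sum((f - fbar) ** 2) / (f.size - 1)
    return 10.0 * np.log10((f[y, x] - fbar) ** 2 / noise)
```

A sharp, isolated correlation peak yields a high PSNR, while a flat or noisy result region drives it down, which is how the measure is used in the comparisons below.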
The first test image is distorted by noise. For every modification step, the object image has uniformly distributed, zero-centred noise added, varying in intensity from 0 to 200 percent of the maximum gray-scale value. Ten of these test images were created to reduce the variance of the results. The mean value and the standard deviation of the results were analyzed (e.g., see Figure 4). Furthermore, the brightness and contrast were changed by linear scaling (16):

g[x, y] = (b[x, y] + c1) · c2. (16)

In the test images, the brightness was changed by varying c1 in 21 steps from −108 to 250. Hence, in the first part, from −108 to 0, only the gray-scale value of the object was changed. In the second part, from 0 to 125, both the value of the object and the background were changed. The difference between the gray values of object and background remained constant. In the last part, from 125 to 250, only the colour of the background was changed. The distance between object and background influences the values of the feature extraction. We expect a significant effect of this variation on the results of the feature-list cross-correlation algorithms. In the next test image, the contrast was changed by varying c2 from 9 to 189 percent. (The image series for the medical application has been recorded by the VisualIS system for digital fundus imaging; thanks to Imedos GmbH, Jena, Germany.)

Figure 2: Example of a retinal fundus image with the selected templates: (1)-(3) medium templates (100 × 100 pixels); (4) small template (40 × 40 pixels); (5) large template which includes the optic nerve (180 × 180 pixels); (6) large template (240 × 140 pixels).

Figure 3: Example results of different cross-correlation algorithms (top: CCA; bottom: FLA). The test image contains triangles varying in brightness (see Figure 1). The 7th image shows the result for the original image. High coefficients are black, low coefficients are white.
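The linear brightness and contrast modification of (16) can be sketched as follows. This assumes the multiplicative reading of c2 and clipping to the 8-bit range, which the paper does not state explicitly.

```python
import numpy as np

def scale_image(b, c1=0.0, c2=1.0):
    """Linear brightness/contrast change following (16):
    g = (b + c1) * c2. Brightness is shifted by c1, contrast is scaled
    by c2; results are clipped to the 8-bit gray-scale range (assumed)."""
    g = (b.astype(float) + c1) * c2
    return np.clip(g, 0, 255).astype(np.uint8)
```

For example, with the objects' gray-scale value of 128, the most negative brightness step c1 = −108 maps the object to a value of 20 while an unchanged background of 0 stays at 0.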
In all variations, only the object gray-scale value was changed in 21 steps from 11 to 240.

2.5.4. Change of object form

The modification of the object image was also analyzed. To do so, the object was scaled using the nearest-neighbor scaling algorithm in 21 steps from a diameter of 61 to 101 pixels. The size of the triangle changed appropriately with the diameter of the wrapped circle. The centre point was kept in the middle of the object. Another test image includes blurred objects, which are generated using a box filter with different mask sizes from 1 to 41 pixels. In most applications, different objects can easily be separated or distinguished. Therefore, as a last variation, the correlation results using deviant templates are analyzed. For this purpose, the scaling test images (see Figure 1) are correlated with the template that is not in the actual image.

2.5.5. Medical application

For the final test, the incorrectly detected templates in the human retinal blood vessel image series are counted. Human retinal images are used because these fundus images have a high individual reproducibility and usually do not change even over longer time intervals. The maximum position in the result of the cross-correlation is assumed to be the detected template position. The positions of the templates and the displacement for each image of the image series are known. A template is incorrectly detected if the distance of the detected template position in the x or y direction is greater than 5 pixels from the known position. In addition, the computational effort for all tests is measured. For the medical application, in addition to the time required for all images, the time required with respect to the template size is analyzed.

3. RESULTS

Figure 3 illustrates the result coefficients of the CCA and the FLA for an exemplary test image that shows triangles varying in brightness. Obviously, the feature-list cross-correlation is more sensitive with regard to changing brightness.
The evaluation results for all cross-correlation algorithms based on the different images are shown in Figures 4 to 9. For each distortion type, four graphs are shown (e.g., see Figure 4).

Figure 4: Influence of changes in noise on the coefficients and the PSNR of the cross-correlation algorithms (average standard deviations: coefficients 0.0072 for triangles and 0.0064 for circles; PSNR 0.65 and 0.59). Top: coefficients of the correlation algorithms (marker: the correct position always detected); bottom: PSNR and standard deviation of the cross-correlation algorithms; left: results of the circles; right: results of the triangles.

The figures at the top illustrate the coefficients of the cross-correlation algorithms. In addition, the validation of the maximum position is visualized. If the maximum position is located in the centre of the object, a marker is displayed on the curve. Since the coefficients of the cross-correlation measures are differently normalized, a comparison of the absolute values is not meaningful. However, the curve progression can be analyzed. The graphs at the bottom show the PSNR. The results of the feature-list cross-correlation algorithms are sometimes negative. In this case, the graphs are truncated. The graphs on the left show the results of the images with circles. The graphs on the right show the results of the images with triangles.

3.1. Variation of image conditions

The influence of noise on the correlation results is shown in Figure 4.
Due to the sensitivity of the feature extraction algorithm to noise and the lower number of values used in the calculation, we expected the feature-list cross-correlation algorithms to be more sensitive to noise than the common cross-correlation algorithms. This assumption is confirmed by the results. The coefficients of the FLA, the DFLA, and the NCCA decrease with increasing noise. The other coefficients remain more or less constant. This curve progression is independent of the form of the object used. For up to 80 percent noise, the position of the maximum value agrees with the object position for all algorithms and all objects. The standard deviation of the coefficients is very low for all algorithms. The PSNR of all feature-list cross-correlation algorithms also decreases with increasing noise (see Figure 4, bottom). The PSNR of the BFLA and the DFLA decreases more strongly than the PSNR of the FLA. But the PSNR values of the feature-list cross-correlation algorithms are up to three times higher than those of the common cross-correlation algorithms. The PSNR of the FLA and the DFLA is higher than that of the common cross-correlation algorithms for up to 90 percent noise. Due to the decreasing variance of the results of the NCCA, the PSNR of the NCCA increases slightly. The standard deviation of the PSNR rises with increasing noise for all algorithms. For the BFLA and the DFLA, it rises even faster than for the other algorithms. The FLA and the CCA always detected the correct position. The BFLA lacks position accuracy. The influence of altering brightness on the results of the analyzed cross-correlation algorithms is shown in Figure 5. The BFLA is robust concerning varying brightness, as the binary images remain the same. The results of the other algorithms vary widely. In the first section, where c1 is between −108 and 0 and the background is constant, the coefficients of the FLA, the DFLA, and the CCA rise, while those of the NCCA remain constant.
In the second section, where c1 is between 0 and 125 and only the difference between object and background is constant, the coefficients of the FLA and the DFLA also remain constant.

Figure 5: Influence of changes in brightness on the coefficients and the PSNR of the cross-correlation algorithms (change of c1 in (16)). Top: coefficients of the correlation algorithms (marker: correct position found); bottom: PSNR of the correlation algorithms; left: results of the circles; right: results of the triangles.

While only the coefficients of the CCA are still rising, those of the NCCA begin to fall. In the last section, where c1 is between 125 and 250 and only the background is changed, the coefficients of the FLA and the DFLA fall, the coefficients of the CCA remain constant, and those of the NCCA are still falling. The curve progression of the coefficients of the FLA and the DFLA can be explained by the result values of the feature extraction. Because of the varying difference between object and background, the extracted feature values change. The PSNR of the FLA, the BFLA, and the CCA is approximately constant (see Figure 5, bottom). The DFLA shows the same curve progression for the PSNR values as for the coefficients. The PSNR of the NCCA depends on the variance of the coefficients around their maximum. With increasing brightness, this variance decreases. Therefore, the PSNR of the NCCA rises if the background colour rises. For all algorithms, the correct position has been detected for all levels of brightness. The difference between the analyzed objects is marginal.
Changing the contrast also leads to correct position detection by all algorithms (see Figure 6). The differences in the results between the analyzed objects are also minimal. The coefficients of the BFLA and the NCCA are approximately constant, while those of the FLA and the CCA rise. Only the coefficients of the DFLA have their maximum value at the position of the unchanged image. The same is true for the PSNR of the DFLA. The PSNR values of all other algorithms are approximately constant when the contrast is varied.

3.2. Change of object form

Figure 7 shows the results of changing the size of the analyzed objects. Where object and template have the same size, the coefficients of all algorithms, except those of the CCA, have a single maximum at the correct position at the centre of the objects. With the triangular object, the peak is not as strong as for the circular object. In those cases where the triangular object is scaled larger than the template, the template is located inside, at the top of the triangle. This leads to constant coefficients for the CCA, but to incorrect positions. The PSNR of all algorithms also has its maximum value when object and template are of the same size (see Figure 7, bottom). Again, the feature-list cross-correlation algorithms have a major peak at the correct position. When the size of the triangles is changed, the feature-list cross-correlation algorithms yield the correct position only if the size of the template and the object is approximately the same. At this point, the coefficients and the PSNR attain a high maximum value. The DFLA gives the best results; moreover, it is the most sensitive algorithm.
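The PSNR curves discussed throughout these results are computed from the correlation surface itself. The exact definition used in the paper appears earlier in the article and is not reproduced in this excerpt; the sketch below assumes one common variant that relates the peak coefficient to the root-mean-square deviation of the remaining coefficients from their mean.

```python
import numpy as np

def correlation_psnr(surface):
    """Assumed PSNR of a correlation surface: the peak coefficient
    against the RMS deviation of all other coefficients from their mean."""
    surface = np.asarray(surface, dtype=np.float64)
    peak_index = np.argmax(surface)
    rest = np.delete(surface.ravel(), peak_index)
    mse = np.mean((rest - rest.mean()) ** 2)
    return 10.0 * np.log10(surface.max() ** 2 / mse)

# A higher peak over the same background yields a higher PSNR.
base = np.outer(np.hanning(16), np.hanning(16))
idx = np.unravel_index(np.argmax(base), base.shape)
boosted = base.copy()
boosted[idx] *= 2.0
```

Under this reading, a single sharp maximum (as the DFLA produces) scores a high PSNR, while a broad or noisy surface scores a low one.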
Figure 6: Influence of changes in contrast on the coefficients and the PSNR of the cross-correlation algorithms (change of c2 in percent in (16)). Top: coefficients of the cross-correlation algorithms (marker: correct position found); bottom: PSNR of the cross-correlation algorithms; left: results of the circles; right: results of the triangles.

The more the image is blurred, the more the coefficients of all algorithms, except those of the BFLA, decrease (see Figure 8). The coefficients of the BFLA increase slightly with increasing blur. The PSNR of all feature-list cross-correlation algorithms decreases strongly if the object is blurred (see Figure 8, bottom); the PSNR of the common cross-correlation algorithms decreases only slightly. The feature-list cross-correlation algorithms find the correct position only as long as the image is lightly blurred.

In addition to the distortions described above, a template that is not present in the image is searched for. The results are visualized in Figure 9. In this case, the coefficients of the feature-list cross-correlation algorithms are 80 percent smaller than the results obtained when the searched object and the template are the same. The coefficients of the other algorithms are higher than those of the feature-list algorithms, as they are decreased by only 20 percent. The position of the maximum coefficient is seldom at the central position. This is to be expected, because the templates are not in the image. Nevertheless, the CCA and the NCCA sometimes have their maximum values at the central position.
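The blur distortion used above is an average filter of growing mask size. A minimal reproduction, assuming an odd mask size, a normalized box kernel, and edge-replicating border handling (the paper's exact border treatment is not specified in this excerpt):

```python
import numpy as np

def box_blur(image, mask_size):
    """Blur with a normalized mask_size x mask_size average filter.
    Assumes an odd mask size; borders are edge-replicated (an assumption)."""
    arr = np.asarray(image, dtype=np.float64)
    pad = mask_size // 2
    padded = np.pad(arr, pad, mode="edge")
    out = np.zeros_like(arr)
    for dy in range(mask_size):
        for dx in range(mask_size):
            out += padded[dy:dy + arr.shape[0], dx:dx + arr.shape[1]]
    return out / (mask_size * mask_size)

# A uniform image is unchanged; a step edge is smeared over the mask width.
flat = np.full((10, 10), 7.0)
step = np.zeros((10, 10))
step[:, 5:] = 1.0
```

Growing the mask spreads each edge over more pixels, which weakens and displaces the gradient features the list-based algorithms rely on, matching the strong PSNR drop reported above.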
The PSNRs of the feature-list cross-correlation algorithms decrease, as do the PSNRs of the other algorithms (see Figure 9, bottom), but those of the feature-list cross-correlation algorithms decrease more strongly. The curve progressions of the feature-list cross-correlation algorithms are definitely different from the corresponding curve progressions of the scaled objects (see Figure 7).

Figure 7: Influence of changing object size on the coefficients and the PSNR of the cross-correlation algorithms. The size is specified as the diameter of the circle circumscribing the object, in pixels. Top: coefficients of the cross-correlation algorithms (marker: correct position found); bottom: PSNR of the cross-correlation algorithms; left: results of the circles; right: results of the triangles.

3.3. Medical application

In Figure 11, the total amount of errors for all templates and all images is shown. The results of the medical application partially confirmed the results of the analytic images. The CCA has the largest amount of errors; only the large templates are sometimes detected. The results of the NCCA are clearly better than those of the CCA. The FLA is derived from the CCA, which could explain why this algorithm also has a high amount of errors. This large amount of errors in relation to the other feature-list cross-correlation algorithms is unexpected, because the former analysis produced better results. On the other hand, the results of the BFLA are better than expected. With medium and large templates, the DFLA and the BFLA have the lowest amount of errors; with small templates, the NCCA has the minimal amount of errors (see Figure 10). But overall, the DFLA achieves the best results (see Figure 10).

3.4. Computational effort

The processing time for the common cross-correlation algorithms is constant for all types of distortion. The algorithms are implemented in C++ using a signal processing framework [40] and the Intel performance library [34]. On a current standard PC, this implementation of the CCA and the NCCA requires about 37 seconds without optimization. The time needed for the feature-list algorithms depends on the length of the feature list. Omitting the noisy and blurred images, this size is constant. Therefore, the processing time for the feature-list algorithms is constant, at about one to two seconds. Hence, these algorithms are 12 to 50 times faster than the common cross-correlation algorithms. However, this is valid only for these analytic examples. Noise and blur lead to a considerably increased feature-list size. Therefore, the feature-list cross-correlation algorithms require about 2 to 10 seconds for the blurred images and 26 to 190 seconds for the noisy images. The FLA requires the highest processing time, about 170 to 210 seconds for the same images. The processing time for the medical images also depends on the size of the template: all the algorithms require more processing time for large templates than for small templates. For the CCA and NCCA, the processing time is approximately the same. The results of the feature-list cross-correlation algorithms vary strongly. The FLA requires the most processing time, but only for the large templates. The feature-list cross-correlation algorithms are up to 12 times faster than the common cross-correlation algorithms. Using other feature extraction algorithms such as the Canny operator [30], the feature-list cross-correlation algorithms are even up to 14 times faster than common cross-correlation algorithms [23].

4. DISCUSSION

As is well known, the CCA is robust with respect to noise. Increasing brightness or contrast causes the coefficients to increase, but the PSNR remains constant. Smaller objects yield smaller coefficients; larger objects result in the same coefficients as the unchanged object. Changing the size and increasing the blur hardly decrease the PSNR. The difference between the two analyzed objects is minimal. This algorithm only detects large templates in the medical images. The required computation time depends on the image and template size and is constant if the image sizes are constant.

As is also known from the literature, the NCCA is robust concerning changes of brightness and contrast. Increasing noise causes falling coefficients but constant PSNR. A change of the object form also influences the coefficients and the PSNR; the maximum value mostly corresponds to the unchanged object. On the medical images, this algorithm obtains the best results for small templates. The required computation time is also constant.

Figure 8: Influence of changing blur on the coefficients and the PSNR of the cross-correlation algorithms. The blur is specified as the size of the convolution mask of the average filter in pixels. Top: coefficients of the correlation algorithms (marker: correct position found); bottom: PSNR of the correlation algorithms; left: results of the circles; right: results of the triangles.

Every feature-list cross-correlation algorithm is sensitive to changes of the object form and is susceptible to noise.
Only the unchanged or minimally changed objects are detected at the correct position. The processing time is independent of the image size. These algorithms are very fast when there is not too much noise.

The BFLA is robust with respect to changes of brightness and contrast. Blur also has only a marginal influence on the result. This algorithm is much more susceptible to noise than the others, but for the medical application it obtains unexpectedly good results.

The FLA is more robust with respect to noise, but it is sensitive to changes of brightness, contrast, object size, blurring, and any change of the object form. Rising contrast causes rising PSNR. However, the FLA requires six times more processing time than the other feature-list cross-correlation algorithms. Surprisingly, this algorithm produces a large amount of errors when detecting the templates in the analyzed medical images.

The results of images analyzed by the DFLA and the FLA are similar. The DFLA reaches the maximum PSNR, and only this algorithm determines a single maximum value. Changes in contrast are also recognized. This algorithm is faster than the FLA, but also more susceptible to noise. Nevertheless, it obtains the best overall results with regard to the medical application.

The results of the common cross-correlation algorithms are similar to those of other publications. In [1], the results of the cross-correlation of Laplacian-filtered images are presented. These results differ from the results of the list-based cross-correlation algorithms. In [1], only the CCA is said to be sensitive concerning brightness and contrast, but the results of this study show that the feature-based cross-correlations FLA and DFLA are sensitive to brightness and contrast, too. Verifying the results with the Laplace operator instead of the Sobel operator leads to similar results.
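The processing-time differences discussed above follow from simple operation counts: a dense correlation performs one multiply-accumulate per image position per template pixel, while a list-based correlation touches only pairs of feature points. A back-of-the-envelope estimate with illustrative sizes (these counts are assumptions, not the paper's measurements):

```python
# Dense cross-correlation: every search position multiplies the whole template.
image_px = 512 * 512        # illustrative image size (not from the paper)
template_px = 64 * 64       # illustrative template size
dense_ops = image_px * template_px

# List-based correlation: one product per pair of image/template features.
# Feature counts are hypothetical; edge points cover a few percent of an image.
image_features = 8000
template_features = 300
list_ops = image_features * template_features

speedup = dense_ops / list_ops   # roughly 450x for these illustrative sizes
```

The measured factors in Table 1 are much smaller, since per-pair work, memory access, and framework overhead differ between the implementations, and noise inflates the feature-list length.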
The major advantage of the feature-list cross-correlation algorithms is the high values of the coefficients and of the PSNR. Different objects can clearly be differentiated due to the strong differences in coefficients and PSNR between correct and incorrect objects. The algorithms are sensitive to all deviations from the template. On the other hand, the algorithms cannot generalize very well. A high quantity of feature points and the use of the local area around a feature point can solve this problem; this can be achieved by smoothing the template. Moreover, missing feature points have a strong influence on the result, whereas, because of (9), additional feature points in the image have no influence on the result. This problem can be reduced by using the local mean or variance, that is, the normed or empirical versions of the feature-list cross-correlation algorithms. But these algorithms require additional processing time to calculate the local mean or variance.

Figure 9: Influence of an absent object on the coefficients and the PSNR of the cross-correlation algorithms. The size is specified as the diameter of the circle circumscribing the object, in pixels. Top: coefficients of the cross-correlation algorithms (marker: correct position found); bottom: PSNR of the cross-correlation algorithms; left: results of the circles; right: results of the triangles.
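The normed variants mentioned above subtract a local mean and divide by the local spread before correlating, in the spirit of the classical normalized cross-correlation. A dense, textbook sketch of that normalization (not the paper's exact NCCA, nor its list-based normed versions):

```python
import numpy as np

def ncc_at(image, template, y, x):
    """Zero-mean normalized cross-correlation of `template` against the
    image patch at offset (y, x); a textbook formulation, not necessarily
    the paper's exact NCCA."""
    h, w = template.shape
    patch = np.asarray(image, dtype=np.float64)[y:y + h, x:x + w]
    t = np.asarray(template, dtype=np.float64)
    p0, t0 = patch - patch.mean(), t - t.mean()
    denom = np.sqrt((p0 ** 2).sum() * (t0 ** 2).sum())
    return float((p0 * t0).sum() / denom) if denom else 0.0

template = np.arange(16.0).reshape(4, 4)
image = 2.0 * template + 30.0   # same pattern under a linear intensity change
coeff = ncc_at(image, template, 0, 0)   # ~1.0: invariant to the change
```

Because patch and template are both reduced to zero mean and unit energy, a linear brightness or contrast change of the patch leaves the coefficient unchanged; the price is the extra mean/variance computation per position noted above.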
Figure 10: Amount of incorrectly detected templates in percent for all images, with regard to the size of the templates (small, medium, and large templates).

The feature-list cross-correlation algorithms are mostly faster than common cross-correlation algorithms. However, the processing time depends on the quantity of feature points in the image, though it is independent of the image size. Therefore, a minimal number of features is one requirement for the feature extraction. Different images have different numbers of feature points and therefore varying processing times, but in most cases the size of the feature list of the template is constant. Moreover, noise causes the number of feature points to increase, thereby increasing the processing time. Furthermore, noise has a strong influence on the correlation result. That is why the feature extraction should be robust with regard to noise; additional smoothing could be necessary. Other distortions also have a strong influence on the result. Furthermore, unintended omission of feature points reduces the coefficients; but if enough feature points remain, the results will be adequate. The sensitivity of the feature-list cross-correlation algorithms could, for instance, be useful for inspecting objects for quality assurance, as every change of the object has an influence on the result.

The medical application clarifies the practical usability of the feature-list cross-correlation algorithms, although the medical images include noise, changes of brightness and contrast, and somewhat changing objects within the image series. The advantage of short processing times is also confirmed.

The choice of the feature-extraction algorithm has a strong influence on the cross-correlation results. The requirements of feature extraction are sophisticated. One major requirement is that the minimum of feature values is not equal to zero. Furthermore, the feature extraction has to be robust, consistent, and fast. For instance, the Harris corner detection [32] only produces a few points with high values; but absent corner points have a strong influence on the result, so the corner points have to be detected with certainty. Other feature extraction algorithms such as the Canny corner detection produce more feature points, so a few missing feature points have no strong influence on the result. The use of signed feature values, such as those gained by the Laplace operator or the Sobel operator, causes a stronger sensitivity with respect to different distortions. To increase the robustness of the feature-list cross-correlation algorithms, additional features could be used. Additionally, only absolute gradient values are used in this paper; considering the orientation of the gradient as well could improve the results. But these additional steps result in an increased processing time.

Table 1: Computation effort of the cross-correlation algorithms in seconds per image for the different analytic test images and the human retinal images.

| Image set | CCA | NCCA | FLA | BFLA | DFLA |
|---|---|---|---|---|---|
| Analytic: noisy images | 37.0 | 37.0 | 190.0 | 32.0 | 26.0 |
| Analytic: blurred images | 37.0 | 37.0 | 10.0 | 2.0 | 2.0 |
| Analytic: other distortions | 37.0 | 37.0 | 2.0 | 1.0 | 1.0 |
| Medical: small templates | 1.20 | 1.21 | 0.76 | 0.24 | 0.21 |
| Medical: medium templates | 5.49 | 5.48 | 2.22 | 0.50 | 0.44 |
| Medical: large templates | 13.09 | 13.12 | 13.69 | 2.95 | 2.75 |

Figure 11: Amount of incorrectly detected templates in percent for all images and all templates.

5. CONCLUSION

In this paper, different feature-list cross-correlation algorithms are compared to common cross-correlation algorithms.
For this purpose, the images and the templates are transformed from the two-dimensional representation into a list representation. Next, every zero value is removed from the image lists. This allows a drastic reduction of computational effort in comparison to common cross-correlation algorithms. The choice of feature extraction offers further possibilities for reducing the number of calculation steps; the Sobel operator is used in this paper as an example. Lastly, the cross-correlation algorithm is adapted into a list-based cross-correlation algorithm. Different kinds of feature-list cross-correlation algorithms are compared.

All feature-list cross-correlation algorithms are sensitive to any change in the object form and are susceptible to noise. However, it is possible to differentiate between similar objects. When the noise level does not exceed a certain value, these algorithms are much faster than common cross-correlation algorithms. These advantages are confirmed by a medical application. For practical application, it must be taken into account that feature-list cross-correlation algorithms do not have a constant processing time; as the image conditions improve, the processing time decreases.

The DFLA achieves the best results and is very fast. The FLA is more robust with regard to noise, but slower than the other feature-list cross-correlation algorithms. The BFLA is robust to changes in brightness and contrast but is more sensitive to noise. The medical application shows that the results of the FLA are worse than the results of the other feature-list cross-correlation algorithms.

The feature-list cross-correlation algorithms are successfully applied to a medical application. For the purpose presented, the DFLA achieves the best results, but the BFLA and the NCCA also achieve sufficiently good results. The choice of the feature extraction algorithm also has a strong influence on the cross-correlation results.
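The pipeline summarized above (feature extraction, removal of zero values into a sparse list, then list-against-list correlation) can be sketched as follows. The Sobel magnitude, the threshold, and the plain sum-of-products score are illustrative assumptions; the paper's FLA, BFLA, and DFLA variants differ in the exact coefficient definition.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

def convolve_valid(image, kernel):
    """Minimal 'valid'-mode 2D correlation, sufficient for 3x3 Sobel masks."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for dy in range(h):
        for dx in range(w):
            out += kernel[dy, dx] * image[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out

def feature_list(image, threshold=1e-6):
    """Sobel gradient magnitude, kept only where it exceeds the threshold:
    a list of (y, x, value) triples instead of a dense 2D array."""
    gx = convolve_valid(image, SOBEL_X)
    gy = convolve_valid(image, SOBEL_X.T)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > threshold)
    return [(int(y), int(x), mag[y, x]) for y, x in zip(ys, xs)]

def list_correlate(image_list, template_list, shape):
    """Sum-of-products correlation driven purely by the two feature lists:
    each feature pair votes for the offset that would align it."""
    surface = np.zeros(shape)
    for iy, ix, iv in image_list:
        for ty, tx, tv in template_list:
            oy, ox = iy - ty, ix - tx
            if 0 <= oy < shape[0] and 0 <= ox < shape[1]:
                surface[oy, ox] += iv * tv
    return surface

# Toy example: a 10x10 square object and a template containing the same square.
img = np.zeros((40, 40))
img[10:20, 10:20] = 255.0
tmpl = np.zeros((14, 14))
tmpl[2:12, 2:12] = 255.0

fl_img, fl_tmpl = feature_list(img), feature_list(tmpl)
surface = list_correlate(fl_img, fl_tmpl, (27, 27))
peak = np.unravel_index(np.argmax(surface), surface.shape)   # (8, 8): correct offset
```

The inner loop runs over pairs of feature points, so the cost is the product of the two list lengths rather than the image area, which is the behaviour reflected in Table 1.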
But the requirements for feature extraction are sophisticated and depend on the respective application as well as on the respective cross-correlation algorithm. Therefore, it is necessary to analyze the influence of different feature extraction algorithms on the results. Hence, it becomes possible to select the best feature extraction algorithm and feature-list cross-correlation algorithm for the actual problem. This will form part of future analytical investigation.

REFERENCES

[17] images," Computer Vision and Image Understanding, vol. 97, no. 2, pp. 209–241, 2005.
C. Stiller and J. Konrad, "Estimating motion in image sequences, a tutorial on modeling and computation of 2D motion," IEEE Signal Processing Magazine, vol. 16, no. 4, pp. 70–91, 1999.
S. R. Deans, "Hough transform from the Radon transform," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 3, no. 2, pp. 185–188, 1981.
J. Princen, J. Illingworth, and J. Kittler, "A formal definition of the Hough transform: properties and relationships," Journal of Mathematical Imaging and Vision, vol. 1, no. 2, pp. 153–168, 1992.
B. Jähne, Digital Image Processing, Springer, Berlin, Germany, 6th revised and extended edition, 2005.
M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, PWS, Pacific Grove, Calif, USA, 1999.
R. Maschotta, S. Boymann, and U. Hoppe, "Regelbasierte Kantenerkennung zur schnellen kantenbasierten Segmentierung der Glottis in Hochgeschwindigkeitsvideos," in Bildverarbeitung für die Medizin, pp. 188–192, Springer, Berlin, Germany, 2005.
R. Maschotta, J. Rehs, S. Boymann, and U. Hoppe, "Evaluation of feature extraction algorithms for the feature-list cross-correlation in retinal images," in Proceedings of the 3rd European Medical and Biological Engineering Conference, Prague, Czech Republic, November 2005.
K. V. Hansen and P. A. Toft, "Fast curve estimation using preconditioned generalized Radon transform," IEEE Transactions on Image Processing, vol. 5, no. 12, pp. 1651–1661, 1996.
J. Radon, "Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten," Berichte Sächsische Akademie der Wissenschaften, Leipzig, Mathematisch-Naturwissenschaftliche Klasse, vol. 69, pp. 262–277, 1917.
J. Beyerer and F. P. León, "Die Radontransformation in der digitalen Bildverarbeitung," Automatisierungstechnik, vol. 50, pp. 472–480, 2002.
C. F. Olson, "Constrained Hough transforms for curve detection," Computer Vision and Image Understanding, vol. 73, no. 3, pp. 329–345, 1999.
N. Guil, J. M. Gonzalez-Linares, and E. L. Zapata, "Bidimensional shape detection using an invariant approach," Pattern Recognition, vol. 32, no. 6, pp. 1025–1038, 1999.
P. V. C. Hough, "A method and means for recognizing complex patterns," US patent 3,069,654, 1962.
J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
W. Förstner and E. Gülch, "A fast operator for detection and precise location of distinct points, corners and centres of circular features," in Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, pp. 281–305, Interlaken, Switzerland, June 1987.
C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, Manchester, UK, August-September 1988.
U. Köthe, "Edge and junction detection with an improved structure tensor," in Pattern Recognition: Proceedings of the 25th DAGM Symposium, vol. 2781 of Lecture Notes in Computer Science, pp. 25–32, Magdeburg, Germany, September 2003.
Intel, Open Source Computer Vision Library: Reference Manual, 2001, http://www.intel.com.
J. P. Lewis, "Fast normalized cross-correlation," in Proceedings of Vision Interface (VI '95), pp. 120–123, Quebec, Canada, May 1995.
EURASIP Journal on Advances in Signal Processing
Hindawi Publishing Corporation