Perceptual Image Hashing with Weighted DWT Features for Reduced-Reference Image Quality Assessment

Abstract

We propose a novel perceptual image hashing scheme based on weighted discrete wavelet transform (DWT) statistical features. The hashing converts the input image into a normalized image by bi-linear interpolation and color space conversion, extracts the edge image of the normalized image via the Canny operator, and divides the edge image into non-overlapping blocks. For each block, a three-level 2D DWT is applied to obtain different sub-bands, and the weighted sum of the DWT statistics of these sub-bands is calculated. Finally, the image hash is generated by concatenating and quantizing these weighted DWT features. Similarity between image hashes is measured by the Euclidean distance. The Copydays dataset and the Uncompressed Color Image Database (UCID) are both used to evaluate the classification performance between robustness and discrimination. Receiver operating characteristics curve comparisons illustrate that our hashing is superior to some state-of-the-art algorithms in classification performance with respect to robustness and discrimination. The LIVE Image Quality Assessment Database is used to validate our application in reduced-reference image quality assessment. Experimental results show that our hashing performs better in image quality assessment than two popular measures, i.e. peak signal-to-noise ratio and structural similarity.

1. INTRODUCTION

The wide application of digital images has produced large-scale image databases and thus calls for efficient techniques of image storage and management [1]. To tackle this issue, many researchers have focused in the past years on a technique called image hashing. Image hashing is an efficient technique for processing digital images. It maps an input image of any size into a content-based compact code called an image hash [2], which is used to represent the input image itself.
Image hashing has been successfully used in many applications [3–8], such as image retrieval, image authentication, image indexing, image copy detection, digital watermarking and multimedia forensics. Note that, in many practical applications, digital images often undergo content-preserving operations, such as JPEG compression, brightness adjustment, contrast adjustment, gamma correction and low-pass filtering. These operations alter the digital representation of an image, but do not change its visual content. Therefore, a content-based image hash should remain unchanged after these operations. In other words, image hashing should be robust against content-preserving operations. This is the first property of image hashing, called perceptual robustness [1, 9]. Another basic property of image hashing is discrimination [1, 10]. It requires that images with different visual contents should have different image hashes. Besides these two basic properties, image hashing should satisfy additional properties for some special applications. For example, it should measure the perceptual difference between digital images for quality assessment.

In the past decade, many image hashing algorithms have been proposed for different applications. For example, Venkatesan et al. [1] used discrete wavelet transform (DWT) coefficients to construct the image hash. This hashing can be used for image indexing. Fridrich and Goljan [11] exploited the projections between the input image and direct current (DC)-free random smooth patterns to generate the hash. This method can be applied to digital watermarking. Lefebvre et al. [12] used the Radon transform (RT) to extract the image hash. This scheme can resist geometric transforms (e.g. image rotation and image scaling), but its discrimination needs improvement. Ou and Rhee [13] applied a 1D discrete cosine transform (DCT) to selected RT projections and took the first alternating current (AC) coefficient of each projection to make the hash.
This RT–DCT hashing is robust to JPEG compression and filtering, but its discrimination is poor. In other work, Lei et al. [14] used the RT, invariant moments and the discrete Fourier transform (DFT) to construct the image hash. Lin and Chang [15] designed a hashing algorithm with invariant relations between DCT coefficients to distinguish JPEG compression from malicious tampering operations. Ahmed et al. [16] combined the DWT and Secure Hash Algorithm 1 (SHA-1) to design a hashing algorithm. These hashing algorithms [14–16] can be used for image authentication, but have weaknesses in resisting some content-preserving operations, such as image rotation and brightness adjustment. Tang et al. [17] proposed a novel lexicographical framework for hash generation, and presented a hashing algorithm with the DCT and non-negative matrix factorization (NMF). This algorithm has good performance in image retrieval. To exploit the discrimination of color images, Tang et al. [18] took color vector angles (CVA) as the color feature and compressed them with the DWT. Kozat et al. [19] used singular value decomposition (SVD) to construct the image hash. They randomly divided the input image into overlapping blocks, applied SVD to every block and constructed a secondary image using the ‘first’ left and right singular vectors of all blocks. Next, they divided the secondary image into overlapping blocks, re-applied SVD to every block and formed the image hash by combining the ‘first’ left and right singular vectors of all blocks. The SVD–SVD hashing [19] can resist image rotation, but its discrimination falls far short of desirable performance. Davarzani et al. [20] exploited SVD and center-symmetric local binary patterns (CSLBP) to design an image hashing for authentication. The discrimination of the SVD–CSLBP hashing is also not desirable. Inspired by the SVD–SVD hashing [19], Monga and Mihcak [21] presented a similar image hashing algorithm by replacing SVD with NMF.
The NMF–NMF–statistics quantization (SQ) hashing [21] outperforms the SVD–SVD hashing, but is sensitive to watermark embedding. In another work, Tang et al. [22] designed an efficient image hashing with ring partition and NMF, where the ring partition divides the input image into a set of rings, i.e. annular regions. This hashing shows better classification performance than the NMF–NMF–SQ hashing, and can be used in content change detection. Li et al. [23] incorporated random Gabor filtering (GF) with dithered lattice vector quantization (LVQ) to design an image hashing. The GF–LVQ hashing achieves good robustness against some digital operations, but its discrimination should be improved. Tang et al. [24] extracted multiple histograms (MH) from different rings of the input image to generate the hash. The MH-based hashing can resist rotation by any angle. Zhao et al. [25] exploited Zernike moments and salient features to form an image hash for authentication. This hashing can resist rotation within five degrees. Recently, Yan et al. [26] used adaptive local feature extraction to design a multi-scale image hashing. This method can be used for tampering detection. Qin et al. [27] presented an image hashing scheme with block truncation coding. This scheme has good robustness and can be applied in image retrieval. Huang et al. [28] designed a secure image hashing via random walk on zigzag blocking. The random walk-based hashing achieves good security. Tang et al. [29] proposed to calculate the image hash by jointly using multidimensional scaling, the log-polar transform (LPT) and the DFT. This algorithm can be used in image copy detection. Although researchers have proposed some useful image hashing techniques, there are still many unsolved problems in practice. For example, current hashing algorithms do not reach desirable classification performance between robustness and discrimination.
In addition, their performance in application to reduced-reference (RR) image quality assessment (IQA) is rarely investigated. In this paper, we propose a perceptual image hashing with weighted DWT statistical features. Our hashing reaches good classification performance between robustness and discrimination. Experiments with three open image datasets are conducted to validate the efficiency of our hashing. Receiver operating characteristics (ROC) results illustrate that our classification performance outperforms those of some state-of-the-art algorithms. The application of our hashing to RR-IQA is discussed, and the results show that our hashing performs better than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) in IQA.

The remainder of this paper is organized as follows. Section 2 introduces the proposed hashing algorithm. Sections 3 and 4 present experimental results and performance comparisons, respectively. Section 5 describes the application in IQA. Conclusions are finally given in Section 6.

2. PROPOSED IMAGE HASHING

The image edge is an important visual feature for distinguishing images by the human visual system (HVS) [30]. As image edges are generally robust to content-preserving operations, and the influences of these operations are well preserved in the edge image, they can be used to design a perceptual image hashing. In addition, the HVS has different sensitivities when observing different image information in terms of frequency and direction. Since the 2D DWT can decompose an input image into a coarse representation and several detail representations in different directions, it can be used to extract perceptual features for image hashing. Based on these considerations, we propose a perceptual image hashing by incorporating image edges with the 2D DWT. As shown in Fig. 1, our hashing consists of four steps. The first step, preprocessing, creates a normalized image.
The second step finds the image edge, and the third step extracts the weighted DWT features. Finally, the weighted features are quantized to make a compact hash. Details of these steps are explained in the following sections.

FIGURE 1. Steps of the proposed hashing.

2.1. Preprocessing

In this step, bi-linear interpolation and color space conversion are exploited to generate a normalized image for feature extraction. Specifically, bi-linear interpolation is first used to convert the input image to a standard size M × M. This ensures that our hashing can resist image scaling and that digital images of different resolutions have the same hash length. The digital image in the RGB color space is then mapped to the YCbCr color space, and the luminance component Y is taken for representation, as it contains most of the geometric and visually significant information. Let R, G and B be the red, green and blue components of a pixel. The conversion from the RGB color space to the YCbCr color space can be conducted by the following equation [22]:

  \begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112 \\ 112 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}   (1)

where Y is the luminance component of the pixel, and Cb and Cr are the blue-difference and red-difference chroma components of the pixel, respectively.

2.2. Edge detection

The image edge is a useful feature for distinguishing images and has been successfully used in many applications, such as image retrieval, image classification and image recognition. In this paper, we exploit the image edge to construct the hash. This is based on the consideration that the image edge can represent the input image, and the influences of digital operations are well preserved in the edge image. In the past years, researchers have proposed some useful edge detection methods [31, 32], such as the Prewitt operator, the Sobel operator, the LoG (Laplacian of Gaussian) operator and the Canny operator.
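Returning to the preprocessing step, the color space conversion of Eq. (1) can be sketched in Python. This is a minimal illustration, not the authors' code: the function name is ours, and the R, G and B components are assumed to be normalized to [0, 1], as is conventional for this form of the conversion matrix.

```python
def rgb_to_ycbcr(r, g, b):
    """Map one RGB pixel (components in [0, 1]) to YCbCr via Eq. (1)."""
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128.0 -  37.797 * r -  74.203 * g + 112.0   * b
    cr = 128.0 + 112.0   * r -  93.786 * g -  18.214 * b
    return y, cb, cr

# A pure white pixel maps to Y = 235 with neutral chroma (Cb = Cr = 128),
# and a pure black pixel to Y = 16 with the same neutral chroma.
```

Only the Y value is retained for the subsequent edge detection step.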
Here we choose the Canny operator as the detection method since it keeps a good tradeoff between detection performance and computational cost. The steps of the classical Canny operator are summarized as follows:

(1) Calculate a smoothed image with a Gaussian filter to reduce the influence of noise on the detection result.
(2) Compute the gradients of all pixels in the smoothed image.
(3) Use non-maximum suppression to avoid spurious responses.
(4) Determine candidate edges by double thresholds.
(5) Track edges via hysteresis.

For more details of the Canny operator, please refer to its original paper [32]. Figure 2 shows an example of edge detection, where (a) is a grayscale image and (b) is the detection result.

FIGURE 2. An example of edge detection. (a) Grayscale image and (b) image edge.

2.3. Weighted DWT feature extraction

In general, content-preserving operations such as JPEG compression, low-pass filtering and noise contamination have a slight influence on the low frequency sub-band, but significantly change the high frequency sub-bands. To measure the perceptual change of a digital image, features extracted from different frequency sub-bands should therefore have different weights. Specifically, the weight of an image feature extracted from a high frequency sub-band should be bigger than that of an image feature extracted from the low frequency sub-band. Note that, when a single-level 2D DWT is applied to an image, four sub-bands are generated, i.e. the LL, LH, HL and HH sub-bands, where the LL sub-band is a low frequency sub-band and the other three are high frequency sub-bands. Based on these considerations, we propose to extract weighted DWT features for constructing the perceptual hash. The detailed extraction is explained as follows.
First, the edge image is divided into non-overlapping blocks of size N × N, where M is an integral multiple of N for simplicity. Consequently, L = (M/N)^2 image blocks are available. For each block, a three-level 2D DWT is applied, generating 10 sub-bands: a low frequency sub-band, i.e. SLL3, and nine high frequency sub-bands, i.e. SLH3, SHL3, SHH3, SLH2, SHL2, SHH2, SLH1, SHL1 and SHH1. Figure 3 illustrates the schematic diagram of a three-level 2D DWT, where the LL sub-band at level 3 is represented by SLL3, the LH sub-band at level 3 is represented by SLH3, …, and the HH sub-band at level 1 is represented by SHH1. Clearly, the 10 sub-bands can be further divided into four categories in terms of the property of their DWT coefficients. (i) Approximation coefficients: the elements of SLL3 are the approximation coefficients. (ii) Detail coefficients in the horizontal direction: the elements of SLH1, SLH2 and SLH3 are the detail coefficients at different levels in the horizontal direction. (iii) Detail coefficients in the vertical direction: the elements of SHL1, SHL2 and SHL3 are the detail coefficients at different levels in the vertical direction. (iv) Detail coefficients in the diagonal direction: the elements of SHH1, SHH2 and SHH3 are the detail coefficients at different levels in the diagonal direction.

FIGURE 3. Schematic diagram of a three-level 2D DWT.

According to these categories of DWT coefficients, the weighted DWT feature of each block is calculated as follows.

(1) Calculate the mean μ0 of the DWT coefficients of SLL3:

  \mu_0 = \frac{1}{p} \sum_{i=1}^{p} S_{LL3}(i)   (2)

where p is the total number of DWT coefficients of SLL3 and S_{LL3}(i) is the ith element of SLL3 (1 ≤ i ≤ p).

(2) Concatenate the DWT coefficients of SLH1, SLH2 and SLH3 to form a vector SLH.
Calculate the variance v1 of the elements of SLH by the following equation:

  v_1 = \frac{1}{k-1} \sum_{j=1}^{k} [S_{LH}(j) - \mu_{LH}]^2   (3)

in which k is the number of elements of SLH, S_{LH}(j) is the jth element of SLH (1 ≤ j ≤ k), and μLH is the mean of the elements of SLH, defined as follows:

  \mu_{LH} = \frac{1}{k} \sum_{j=1}^{k} S_{LH}(j)   (4)

(3) Concatenate the DWT coefficients of SHL1, SHL2 and SHL3 to form a vector SHL. Compute the variance v2 of the elements of SHL by the following equation:

  v_2 = \frac{1}{b-1} \sum_{j=1}^{b} [S_{HL}(j) - \mu_{HL}]^2   (5)

in which b is the number of elements of SHL, S_{HL}(j) is the jth element of SHL (1 ≤ j ≤ b), and μHL is the mean of the elements of SHL, defined as follows:

  \mu_{HL} = \frac{1}{b} \sum_{j=1}^{b} S_{HL}(j)   (6)

(4) Similarly, concatenate the DWT coefficients of SHH1, SHH2 and SHH3 to form a vector SHH. Calculate the variance v3 of the elements of SHH by the following equation:

  v_3 = \frac{1}{t-1} \sum_{j=1}^{t} [S_{HH}(j) - \mu_{HH}]^2   (7)

where t is the number of elements of SHH, S_{HH}(j) is the jth element of SHH (1 ≤ j ≤ t), and μHH is the mean of the elements of SHH, defined as follows:

  \mu_{HH} = \frac{1}{t} \sum_{j=1}^{t} S_{HH}(j)   (8)

(5) The DWT feature is the weighted sum of the above statistics of the DWT coefficients:

  s = \mu_0 w_0 + v_1 w_1 + v_2 w_2 + v_3 w_3   (9)

where w0, w1, w2 and w3 are the weights of μ0, v1, v2 and v3, which satisfy the following equation:

  w_0 + w_1 + w_2 + w_3 = 1   (10)

In general, we select the weights following the relation w0 < w1 = w2 < w3. This is based on the following considerations. The influences of content-preserving operations fall mainly on the high frequency sub-bands. Their change to the low frequency sub-band is slight, while their influence on the sub-band in the diagonal direction is the most significant. Moreover, the sub-bands in the horizontal and vertical directions are generally of the same importance.

Let si be the weighted DWT feature of the ith image block, where 1 ≤ i ≤ L. Thus, we obtain a feature sequence s representing the input image:

  s = [s_1, s_2, \ldots, s_L]   (11)
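The per-block statistics of Eqs. (2)–(10) can be sketched in Python. This is an illustrative sketch rather than the authors' implementation: it assumes the three-level 2D DWT has already been computed (e.g. with a wavelet library), and takes the flattened coefficient groups as plain lists; the function and argument names are ours.

```python
def weighted_dwt_feature(a_ll3, d_lh, d_hl, d_hh, w=(0.1, 0.15, 0.15, 0.6)):
    """Weighted DWT feature of one block, Eqs. (2)-(9).

    a_ll3            : approximation coefficients of SLL3, flattened
    d_lh, d_hl, d_hh : concatenated detail coefficients of the three
                       horizontal / vertical / diagonal sub-bands
    w                : (w0, w1, w2, w3), summing to 1 as in Eq. (10)
    """
    def mean(x):
        return sum(x) / len(x)

    def var(x):  # sample variance with (k - 1) denominator, as in Eqs. (3)-(8)
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    mu0 = mean(a_ll3)                             # Eq. (2)
    v1, v2, v3 = var(d_lh), var(d_hl), var(d_hh)  # Eqs. (3), (5), (7)
    w0, w1, w2, w3 = w
    assert abs(w0 + w1 + w2 + w3 - 1.0) < 1e-9    # Eq. (10)
    return mu0 * w0 + v1 * w1 + v2 * w2 + v3 * w3  # Eq. (9)
```

The default weights match the setting used in the experiments below (w0 = 0.1, w1 = w2 = 0.15, w3 = 0.6), which satisfies w0 < w1 = w2 < w3.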
2.4. Quantization

To reduce the storage cost of the proposed hashing, the feature sequence s is quantized to an integer representation by the following equation:

  h_i = \mathrm{round}(s_i \times 100)   (12)

where round(·) is the function rounding its input to the nearest integer. Finally, our image hash h is available as follows:

  h = [h_1, h_2, \ldots, h_L]   (13)

Clearly, our hash length is L integers. In the experiments, we find that each integer of our hash requires only 6 bits for storage. Therefore, our hash length is 6L bits in binary form, which will be validated in Section 3.3.

2.5. Hash similarity calculation

Assume that h_1 = [h_1^{(1)}, h_2^{(1)}, …, h_L^{(1)}] and h_2 = [h_1^{(2)}, h_2^{(2)}, …, h_L^{(2)}] are the image hashes of two images. To measure their similarity, the Euclidean distance is taken as the metric, defined as follows:

  d(h_1, h_2) = \sqrt{\sum_{i=1}^{L} [h_i^{(1)} - h_i^{(2)}]^2}   (14)

In general, a smaller Euclidean distance means more similar images. If the Euclidean distance d is bigger than a given threshold T, the images of the input hashes are judged to be different images. Otherwise, the images are viewed as a pair of similar images.

3. EXPERIMENTAL RESULTS

In this section, many experiments are carried out to validate the performance of our image hashing. The parameter settings used in these experiments are as follows. The input image is resized to 512 × 512 and the block size is 64 × 64, i.e. M = 512 and N = 64. For the Canny operator, the standard deviation of the Gaussian filter is 1.5 and the double thresholds are ρ1 = 0.04 and ρ2 = 0.10. The symlet wavelet (‘sym8’ in MATLAB) is used as the wavelet filter of the 2D DWT. The weights for the statistics of the DWT coefficients are w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6. Therefore, our hash length is L = (M/N)^2 = 64 integers. Sections 3.1 and 3.2 analyze robustness and discrimination, respectively. Our hash length in binary form is discussed in Section 3.3. ROC performances under different parameter settings are presented in Section 3.4.
3.1. Robustness

The well-known open dataset called the Copydays dataset [33] is selected as the image database for robustness validation. This database contains 157 color images, whose sizes range from 1200 × 1600 to 3008 × 2000. Figure 4 presents some sample images of this database. To generate visually similar versions of these 157 color images, content-preserving operations provided by Photoshop, MATLAB and StirMark 4.0 [34] are exploited to conduct robustness attacks. StirMark 4.0 can be freely downloaded from the following website: http://www.petitcolas.net/watermarking/stirmark/. The operations used include brightness adjustment, contrast adjustment, gamma correction, 3 × 3 Gaussian low-pass filtering, speckle noise, salt and pepper noise, JPEG compression, watermark embedding, image scaling, and the combinational operation of rotation, cropping and rescaling. Table 1 lists the detailed parameter settings of these content-preserving operations. Clearly, 74 different manipulations in total are used in the robustness test. This means that every color image has 74 visually similar versions, and the number of similar images is 157 × 74 = 11 618. Therefore, the total number of images used in the robustness experiment is 11 618 + 157 = 11 775.

FIGURE 4. Sample images of the Copydays dataset.

TABLE 1. Digital operations and their parameter settings with the number of images created.
Tool       Operation                          Parameter           Parameter setting              Number
Photoshop  Brightness adjustment              Photoshop’s scale   ±10, ±20                       4
Photoshop  Contrast adjustment                Photoshop’s scale   ±10, ±20                       4
MATLAB     Gamma correction                   γ                   0.75, 0.9, 1.1, 1.25           4
MATLAB     3 × 3 Gaussian low-pass filtering  Standard deviation  0.3, 0.4, …, 1.0               8
MATLAB     Speckle noise                      Variance            0.001, 0.002, …, 0.01          10
MATLAB     Salt and pepper noise              Density             0.001, 0.002, …, 0.01          10
StirMark   JPEG compression                   Quality factor      30, 40, …, 100                 8
StirMark   Watermark embedding                Strength            10, 20, …, 100                 10
StirMark   Image scaling                      Ratio               0.5, 0.75, 0.9, 1.1, 1.5, 2.0  6
StirMark   Rotation, cropping and rescaling   Angle in degrees    ±1, ±2, ±3, ±4, ±5             10
Total                                                                                            74
We extract image hashes of the 157 original color images and their similar versions, and evaluate their similarity with the Euclidean distance. Table 2 illustrates statistical results of hash distances under different content-preserving operations. It is observed that the minimum distances of these operations are all smaller than 7, and their maximum distances are smaller than 60. As to the mean distance, all values are smaller than 20, except the operation of rotation, cropping and rescaling.
The mean distance for rotation, cropping and rescaling is 30.24, which is much bigger than those of the other operations. This is because rotation, cropping and rescaling is a combinational attack, which introduces more distortion into digital images. Moreover, all standard deviations are small, none bigger than 9. If the threshold is selected as T = 30, 92.12% of similar images are correctly identified. With T = 40, the correct detection rate of similar images is 98.26%. When T reaches 60, the correct detection rate is 100%.

TABLE 2. Statistical results of hash distances under different operations.

Operation                          Minimum  Maximum  Mean   Standard deviation
Brightness adjustment              1.41     44.52    11.20  6.33
Contrast adjustment                1.00     34.47    9.68   4.92
Gamma correction                   1.73     55.91    13.36  7.56
3 × 3 Gaussian low-pass filtering  0        32.62    7.83   5.58
Speckle noise                      2.24     36.89    10.36  5.56
Salt and pepper noise              1.00     45.55    9.97   5.13
JPEG compression                   1.00     42.98    10.31  5.81
Watermark embedding                0        45.17    9.57   6.34
Image scaling                      1.41     37.54    10.32  5.45
Rotation, cropping and rescaling   6.63     54.18    30.24  8.01
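The hash generation of Eq. (12) and the similarity test of Eq. (14) behind these distance statistics can be sketched in Python. This is a minimal sketch with our own function names; the default threshold T = 40 is one of the operating points discussed above.

```python
import math

def quantize(s):
    """Eq. (12): scale each weighted DWT feature by 100 and round to an integer."""
    return [round(x * 100) for x in s]

def hash_distance(h1, h2):
    """Eq. (14): Euclidean distance between two hashes of equal length L."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def is_similar(h1, h2, threshold=40.0):
    """Judge two images as visually similar when d(h1, h2) <= T."""
    return hash_distance(h1, h2) <= threshold
```

At T = 40 this rule keeps 98.26% of the similar pairs above while, as Section 3.2 shows, misclassifying only 0.013% of different-image pairs.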
3.2. Discrimination

The open image database called the Uncompressed Color Image Database (UCID) [35] is used to test the discrimination of our hashing. The UCID contains 1338 color images, whose sizes are 512 × 384 or 384 × 512. Figure 5 presents some sample images of the UCID. In this experiment, the hashes of these 1338 color images are first extracted. For each image, the Euclidean distances between its hash and the hashes of the other 1337 color images are calculated. Finally, 894 453 valid distances are obtained. Figure 6 shows the distribution of these distances, where the x-axis represents the Euclidean distance and the y-axis is the frequency of the Euclidean distance. It is observed that the mean of these Euclidean distances is 112.87 and their standard deviation is 34.38.
Obviously, the mean and standard deviation for different images are much bigger than those for similar images (the biggest mean and standard deviation for similar images are 30.24 and 8.01). This illustrates the good discrimination of our hashing. For example, if T = 30, only 0.001% of different images are wrongly identified as similar images. If T = 40, 0.013% of different images are mistakenly classified. Moreover, if T = 60, 1.197% of different images are wrongly detected as similar images.

FIGURE 5. Sample images of the UCID.

FIGURE 6. Distribution of Euclidean distances between hashes of different images.

3.3. Hash length

To determine our hash length in binary form, the hashes of the 1338 color images generated in the discrimination test are taken as the data source for analysis. Since every hash contains 64 integers, there are 64 × 1338 = 85 632 hash elements in total. The distribution of these elements is shown in Fig. 7, where the x-axis is the value of the hash element and the y-axis represents the frequency of the element value. From the result, it is observed that the minimum value is 0 and the maximum value is 63. As 6 bits can represent integers ranging from 0 to 2^6 − 1 = 63, each hash element requires only 6 bits for storage. Therefore, our hash length in binary form is 6L bits. In the experiment, L is 64 and thus our hash length is 6 × 64 = 384 bits.

FIGURE 7. Distribution of hash elements.

3.4. ROC performances under different parameters

In this section, we discuss the effect of the main parameters (i.e. the block size and the weights of the DWT features) on ROC performance. The test image databases used in Sections 3.1 and 3.2 are also adopted.
In each experiment, we change only one kind of parameter (the block size or the weights of the DWT features) and keep the other parameters unchanged. The ROC graph [36] is exploited to make visual classification comparisons with respect to robustness and discrimination. In the ROC graph, the x-axis is generally defined as the false positive rate (FPR) P_FPR and the y-axis as the true positive rate (TPR) P_TPR, which can be determined by the following equations:

  P_{\mathrm{FPR}}(d \le T) = \frac{n_{\mathrm{FPR}}}{N_{\mathrm{FPR}}}   (15)

  P_{\mathrm{TPR}}(d \le T) = \frac{n_{\mathrm{TPR}}}{N_{\mathrm{TPR}}}   (16)

in which n_FPR is the number of pairs of different images mistakenly considered similar, N_FPR is the total number of pairs of different images, n_TPR is the number of pairs of visually similar images correctly identified as similar, and N_TPR is the total number of pairs of visually similar images. Obviously, P_FPR and P_TPR indicate discrimination and robustness, respectively. A small P_FPR represents good discrimination, and a big P_TPR means good robustness. Note that an ROC curve is obtained by varying the threshold T to generate a set of points (P_FPR, P_TPR). An ROC curve near the top-left corner implies a small FPR and a big TPR, and shows better classification performance than curves far away from the top-left corner.

For the block size, the tested settings are 32 × 32, 64 × 64 and 128 × 128. Figure 8 shows the ROC curve comparisons among different block sizes. It is observed that all ROC curves are close to the top-left corner, indicating good robustness and discrimination. Moreover, the ROC curves near the top-left corner are enlarged and presented in the bottom-right part of Fig. 8. Clearly, the ROC curve of 64 × 64 is closer to the top-left corner than the curves of the other block sizes. To make a quantitative analysis, the area under the ROC curve (AUC) [36] is calculated. Note that the range of the AUC is [0, 1]. The bigger the AUC, the better the classification performance. AUC results under different block sizes are listed in Table 3.
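The ROC quantities of Eqs. (15) and (16), and the AUC used in the comparisons below, can be computed as sketched here. This is our own illustration, assuming the similar-pair and different-pair distances have already been collected, with the AUC approximated by the trapezoidal rule over the threshold sweep.

```python
def roc_points(similar_dists, different_dists, thresholds):
    """Eqs. (15)-(16): one (P_FPR, P_TPR) point per threshold T."""
    points = []
    for t in thresholds:
        p_tpr = sum(d <= t for d in similar_dists) / len(similar_dists)
        p_fpr = sum(d <= t for d in different_dists) / len(different_dists)
        points.append((p_fpr, p_tpr))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule (points sorted by FPR)."""
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
```

With perfectly separated distance populations and thresholds that cover both ends of the sweep, the curve passes through (0, 0), (0, 1) and (1, 1), giving an AUC of 1.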
It is found that the AUC of 64 × 64 is bigger than those of 32 × 32 and 128 × 128. This means that the ROC performance of 64 × 64 is better than the performances of 32 × 32 and 128 × 128.

FIGURE 8. ROC curve comparisons among different block sizes.

TABLE 3. AUC results under different block sizes.

Block size  AUC
32 × 32     0.99992
64 × 64     0.99998
128 × 128   0.99931

For the weights of the DWT features, we select four combinations as follows: (i) w0 = 0.05, w1 = 0.15, w2 = 0.15, w3 = 0.65; (ii) w0 = 0.1, w1 = 0.15, w2 = 0.15, w3 = 0.6; (iii) w0 = 0.15, w1 = 0.2, w2 = 0.2, w3 = 0.45; (iv) w0 = 0.2, w1 = 0.25, w2 = 0.25, w3 = 0.3. Figure 9 presents the ROC curve comparisons under different weights, where the curves near the top-left corner are enlarged and shown in the bottom-right part for viewing details. It is observed that the ROC curve of w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6 is closer to the top-left corner than those of the other weight combinations. The AUC of each weight combination is also calculated for quantitative comparison; the results are listed in Table 4. It is found that the AUC of w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6 is bigger than those of the other weight combinations, which illustrates that its ROC performance is the best among the tested weight combinations.

FIGURE 9. ROC curve comparisons among different weights.

TABLE 4. AUC results under different weights.
Weights                                       AUC
w0 = 0.05, w1 = 0.15, w2 = 0.15, w3 = 0.65    0.99992
w0 = 0.10, w1 = 0.15, w2 = 0.15, w3 = 0.60    0.99998
w0 = 0.15, w1 = 0.20, w2 = 0.20, w3 = 0.45    0.99979
w0 = 0.20, w1 = 0.25, w2 = 0.25, w3 = 0.30    0.99694

4. PERFORMANCE COMPARISONS
To show our advantages, we compare our image hashing with seven popular hashing algorithms, including NMF–NMF–SQ hashing [21], CVA–DWT hashing [18], GF–LVQ hashing [23], SVD–CSLBP hashing [20], random walk-based hashing [28], MH-based hashing [24] and RT–DCT hashing [13]. In the comparisons, the two open image databases used in Section 3 are taken, and the parameter settings of the assessed algorithms are as follows. For NMF–NMF–SQ hashing, the normalized image size is 512 × 512, the block size is 64 × 64, the block number is 80, and the ranks of the first and second NMFs are 2 and 1, respectively. For CVA–DWT hashing, its default parameters are used, i.e. the image size is 512 × 512 and the block size is 32 × 32. For GF–LVQ hashing, the normalized image size is 512 × 512, and 40 rings, each with a width of three pixels, are selected for hash generation. For SVD–CSLBP hashing, the optimal parameters reported in the original paper are taken, i.e. the image size is 256 × 256 and the block size is 32 × 32.
For random walk-based hashing, 20 × 20 grids are selected from the image, 48 blocks are then picked out by random walk and each block is represented with 3 bits. For MH-based hashing, the normalized image size is 512 × 512, the number of rings is 5 and the number of segments is 3. For RT–DCT hashing, the normalized image size is 512 × 512, 40 AC coefficients are selected and each is represented with 6 bits. To make fair comparisons, the original hash-similarity metrics of the compared algorithms are adopted here. The hash lengths of NMF–NMF–SQ hashing, SVD–CSLBP hashing and MH-based hashing are therefore 64, 64 and 15 floats, respectively. For CVA–DWT hashing, GF–LVQ hashing, random walk-based hashing and RT–DCT hashing, the hash lengths are 960, 120, 144 and 240 bits, respectively. Figure 10 presents the ROC curve comparisons among different hashing algorithms, where the bottom-right part shows the enlarged curves near the top-left corner. As can be seen from Fig. 10, the ROC curve of our hashing is closer to the top-left corner than those of the compared algorithms, so it can be intuitively concluded that our hashing has better classification performance. To make a quantitative analysis, the AUCs of these algorithms are also calculated. The AUCs of NMF–NMF–SQ hashing, CVA–DWT hashing, GF–LVQ hashing, SVD–CSLBP hashing, random walk-based hashing, MH-based hashing and RT–DCT hashing are 0.9979, 0.9939, 0.9794, 0.9507, 0.9398, 0.8791 and 0.7842, respectively. For our hashing, the AUC is 0.9999, which is bigger than those of all compared algorithms. This illustrates that our hashing is better than the compared algorithms in classification performance between robustness and discrimination.

FIGURE 10. ROC curve comparisons among different algorithms.
In addition, the computational time of the different algorithms is measured. All algorithms are coded in MATLAB R2016a, running on a desktop PC with a 3.60 GHz Intel Core i7-7700 CPU and 8.0 GB RAM; the operating system is Windows 10 (64-bit). The total time consumed in extracting the hashes of the 1338 images in the discrimination test of each algorithm is recorded, and the average running time for generating a hash is then calculated. The average times of NMF–NMF–SQ hashing, CVA–DWT hashing, GF–LVQ hashing, SVD–CSLBP hashing, random walk-based hashing, MH-based hashing and RT–DCT hashing are 0.281, 0.018, 0.283, 0.062, 0.014, 0.023 and 0.322 s, respectively. Our average time is 0.123 s. Clearly, our hashing is slower than CVA–DWT hashing, SVD–CSLBP hashing, random walk-based hashing and MH-based hashing, but faster than NMF–NMF–SQ hashing, GF–LVQ hashing and RT–DCT hashing. A summary of the performance comparisons is listed in Table 5. Our AUC is the biggest among the assessed algorithms, indicating that our hashing is superior to the compared algorithms in classification between robustness and discrimination. As to computational time, our hashing has moderate performance. For hash storage, the length of our hash is 64 integers, i.e. 384 bits in binary form. It is shorter than those of NMF–NMF–SQ hashing, CVA–DWT hashing and SVD–CSLBP hashing (note that a floating-point value requires more bits than an integer for storage), but longer than those of GF–LVQ hashing, random walk-based hashing and RT–DCT hashing. For MH-based hashing, the hash length is 15 floats. According to the IEEE Standard [37], at least 32 bits are required for storing a float, so the hash length of MH-based hashing is 15 × 32 = 480 bits in binary form, which is longer than our hash length.

TABLE 5. Performance comparisons among different hashing algorithms.
Performance   NMF–NMF–SQ   CVA–DWT    GF–LVQ     SVD–CSLBP   Random walk   MH-based    RT–DCT     Our hashing
AUC           0.9979       0.9939     0.9794     0.9507      0.9398        0.8791      0.7842     0.9999
Time (s)      0.281        0.018      0.283      0.062       0.014         0.023       0.322      0.123
Hash length   64 floats    960 bits   120 bits   64 floats   144 bits      15 floats   240 bits   384 bits

5. APPLICATION IN REDUCED-REFERENCE IMAGE QUALITY ASSESSMENT
IQA [38] plays an important role in image compression, image transmission, image display and so on. Generally, the reported metrics of IQA can be divided into three kinds [39].
(i) Full-reference (FR) IQA: the reference (original) image and its distorted image are both available for quality assessment. (ii) Reduced-reference (RR) IQA: some features of the reference image are known and the distorted image is available. (iii) No-reference IQA: the reference image is unavailable and only the distorted image is used to evaluate quality. In this section, we illustrate our application in RR-IQA. Figure 11 presents the block diagram of our hashing in application to RR-IQA. At the sender's side, the hash of the reference image is generated by our hashing and sent to the receiver via an auxiliary channel, while the reference image itself is sent through the transmission channel. At the receiver's side, the distorted version of the reference image and the hash of the reference image are both received. Next, the hash of the distorted image is extracted by our hashing. Finally, the objective score is obtained by calculating the Euclidean distance between the received hash and the extracted hash. Section 5.1 introduces the dataset used for IQA and Section 5.2 presents IQA performance comparisons.

FIGURE 11. Block diagram of our hashing in application to RR-IQA.

5.1. Dataset for IQA
The well-known open dataset called the LIVE Image Quality Assessment Database [40] is used for IQA. The LIVE image database is provided by the Laboratory for Image & Video Engineering (LIVE) at the University of Texas at Austin. It contains 29 RGB color images and their distorted versions. In the experiment, we use these 29 original color images and their distorted versions attacked by Gaussian blur, white Gaussian noise and bit errors in the JPEG2000 bitstream when transmitted over a simulated fast-fading Rayleigh channel (six distorted images per original image for each operation).
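The sender/receiver flow of Fig. 11 can be sketched as follows. The `toy_hash` function below is a hypothetical stand-in for the paper's hash (simple block means instead of weighted DWT statistics of Canny edge blocks); only the overall RR-IQA protocol of hashing at both ends and scoring by Euclidean distance is illustrated.

```python
import numpy as np

def toy_hash(image, blocks=8):
    """Hypothetical stand-in for the paper's hash: per-block mean intensities.

    The real scheme extracts weighted DWT statistics from Canny edge blocks;
    any function mapping an image to a short feature vector fits this sketch.
    """
    h, w = image.shape
    bh, bw = h // blocks, w // blocks
    return np.array([image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                     for i in range(blocks) for j in range(blocks)])

def rr_iqa_score(reference_hash, distorted_image):
    """Receiver side: hash the distorted image and return the Euclidean
    distance to the received reference hash (larger = worse quality)."""
    return float(np.linalg.norm(reference_hash - toy_hash(distorted_image)))

# Sender side: hash the reference and transmit it over the auxiliary channel.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
ref_hash = toy_hash(ref)                       # sent via auxiliary channel
noisy = np.clip(ref + rng.normal(0, 0.2, ref.shape), 0, 1)
print(rr_iqa_score(ref_hash, ref) <= rr_iqa_score(ref_hash, noisy))  # True
```

An undistorted image reproduces the reference hash exactly (distance 0), while distortion moves the hash away from the reference, so the distance behaves as an objective quality score.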
The fast-fading Rayleigh channel means that the signal transmitted over the channel is affected by fast Rayleigh fading. As there are 29 × 6 = 174 distorted images for each operation, the total number of distorted images is 174 × 3 = 522. Figure 12 presents some sample images of the LIVE image database, where (a) are the original images, (b) are the blurred versions, (c) are the noisy versions and (d) are the distorted versions attacked by bit errors in the JPEG2000 bitstream. For easy comparison, the Differential Mean Opinion Score (DMOS) of every distorted image is provided in the LIVE image database. Note that the range of DMOS is [0, 100]; the smaller the DMOS, the better the quality of the distorted image.

FIGURE 12. Sample images of the LIVE image database. (a) Original images, (b) blurred images, (c) noisy images and (d) distorted images attacked by bit errors in the JPEG2000 bitstream.

5.2. IQA performance comparison
To evaluate the IQA performance of our hashing, three well-known objective measures are taken: the Linear Pearson Correlation Coefficient (LPCC), the Spearman Rank-Order Correlation Coefficient (ROCC) and the Root Mean Square Error (RMSE). Note that the inputs of these metrics are the provided DMOS and the DMOS prediction (DMOSP). Since the human vision system perceives images in a nonlinear fashion, the DMOS generally changes nonlinearly. To simulate this nonlinear characteristic, the objective score of every distorted image is mapped to DMOSP by the well-known nonlinear logistic function. Let yi be the ith DMOS, xi the corresponding DMOSP of yi, and n the number of distorted images.
Thus, the LPCC, ROCC and RMSE are defined as follows:

  \mathrm{LPCC} = \frac{\sum_{i=1}^{n}(x_i - u_x)(y_i - u_y)}{\sqrt{\sum_{i=1}^{n}(x_i - u_x)^2}\,\sqrt{\sum_{i=1}^{n}(y_i - u_y)^2}}   (17)

  \mathrm{ROCC} = 1 - \frac{6\sum_{i=1}^{n}(x_i - y_i)^2}{n(n^2 - 1)}   (18)

  \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2}   (19)

where ux and uy are the means of {x1, x2, …, xn} and {y1, y2, …, yn}, respectively. The ranges of LPCC and ROCC are both [−1, 1]. A big LPCC or ROCC means good RR-IQA performance. Since RMSE indicates the error between DMOSP and DMOS, a small RMSE implies good RR-IQA performance. As a reference, the LPCC, ROCC and RMSE results of two popular FR-IQA algorithms, PSNR [41] and SSIM [42], are also calculated. Table 6 lists the performance comparisons based on the 174 blurred images. It is observed that the LPCC and ROCC values of our hashing are bigger than those of PSNR and SSIM, and our RMSE is smaller than those of PSNR and SSIM. This implies that our hashing is better than PSNR and SSIM in measuring blurred images. Table 7 presents the performance comparisons based on the 174 noisy images. Our LPCC and ROCC values are smaller than those of PSNR and SSIM, while our RMSE is bigger, meaning that PSNR and SSIM are superior to our hashing in evaluating noisy images. Table 8 presents the performance comparisons based on the 174 distorted images attacked by bit errors. Our LPCC and ROCC values are bigger than those of PSNR but smaller than those of SSIM; similarly, our RMSE is smaller than that of PSNR but bigger than that of SSIM. This illustrates that our hashing is better than PSNR but worse than SSIM in evaluating the quality of distorted images attacked by bit errors. PSNR and SSIM have better IQA performance than our hashing in some cases because they are both FR-IQA metrics, while our hashing is an RR-IQA technique: an FR-IQA metric can use the whole information of the reference image, whereas an RR-IQA metric can only use partial information, so FR-IQA metrics are generally better.
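Equations (17)–(19) can be sketched in Python as follows. The DMOS/DMOSP arrays are hypothetical illustration values (in practice xi would be the logistic-mapped objective scores), and the ROCC sketch applies Equation (18) to rank positions, which is how the Spearman coefficient is computed in practice, assuming no ties.

```python
import numpy as np

def lpcc(x, y):
    """Linear Pearson correlation coefficient, Eq. (17)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx**2).sum() * (dy**2).sum()))

def rocc(x, y):
    """Spearman rank-order correlation: Eq. (18) applied to rank positions."""
    rx = np.argsort(np.argsort(x))  # rank of each entry (assumes no ties)
    ry = np.argsort(np.argsort(y))
    n = len(rx)
    return float(1 - 6 * ((rx - ry)**2).sum() / (n * (n**2 - 1)))

def rmse(x, y):
    """Root mean square error between DMOSP and DMOS, Eq. (19)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(((x - y)**2).mean()))

# Hypothetical DMOS and mapped predictions (DMOSP) for five distorted images.
dmos  = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
dmosp = np.array([12.0, 22.0, 45.0, 58.0, 83.0])
print(lpcc(dmos, dmosp), rocc(dmos, dmosp), rmse(dmos, dmosp))
```

Since the two toy arrays have identical rank orderings, the ROCC is exactly 1, while the LPCC and RMSE reflect the small per-image prediction errors.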
Table 9 presents the overall performance comparisons based on all 522 distorted images. Our LPCC and ROCC values are bigger than those of PSNR and SSIM, and our RMSE is smaller than those of PSNR and SSIM. Therefore, it can be concluded that our overall IQA performance is better than those of PSNR and SSIM.

TABLE 6. Performance comparisons based on 174 blurred images.
Index   PSNR      SSIM     Our hashing
LPCC    0.7420    0.9015   0.9290
ROCC    0.7256    0.9146   0.9205
RMSE    10.5408   6.8041   5.8178

TABLE 7. Performance comparisons based on 174 noisy images.
Index   PSNR     SSIM     Our hashing
LPCC    0.9689   0.9708   0.9182
ROCC    0.9820   0.9653   0.9285
RMSE    3.9498   3.8291   6.3267

TABLE 8. Performance comparisons based on 174 distorted images attacked by bit errors.
Index   PSNR     SSIM     Our hashing
LPCC    0.8463   0.9290   0.9039
ROCC    0.8451   0.9291   0.9012
RMSE    8.7611   5.1884   7.0341
TABLE 9. Performance comparisons based on all 522 distorted images.
Index   PSNR      SSIM     Our hashing
LPCC    0.7242    0.8546   0.8664
ROCC    0.7336    0.8678   0.8700
RMSE    11.0684   8.3351   8.0157

Moreover, scatter plots of DMOS against DMOSP are drawn, together with the fitted curves estimated by logistic regression analysis [43]. Figure 13 shows the comparison results based on the 174 blurred images, where (a) is the result of PSNR, (b) is the result of SSIM and (c) is our result. It is observed that the points of PSNR and SSIM are scattered over a wide area, while our points concentrate around the fitted curve. Figure 14 shows the comparison results based on the 174 noisy images. Here our points are scattered over a large area, while the fitted results of PSNR and SSIM are good. Figure 15 compares the scatter plots and fitted curves based on the 174 distorted images attacked by bit errors. The fitted results of SSIM and our hashing have similar shapes but inverse directions, and both outperform the fitted result of PSNR. Figure 16 presents the comparisons of scatter plots and fitted curves based on all 522 distorted images.
It can be seen that the fitted result of our hashing is better than those of PSNR and SSIM, which again verifies that the overall IQA performance of our hashing is better than those of PSNR and SSIM. From the above experimental results, we can conclude that our hashing has good IQA performance and can be applied to RR-IQA.

FIGURE 13. Comparisons of scatter plots and fitted curves based on 174 blurred images. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 14. Comparisons of scatter plots and fitted curves based on 174 noisy images. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 15. Comparisons of scatter plots and fitted curves based on 174 distorted images attacked by bit errors. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 16. Comparisons of scatter plots and fitted curves based on 522 distorted images. (a) PSNR, (b) SSIM and (c) our hashing.

6. CONCLUSIONS
In this paper, we have proposed a perceptual image hashing with weighted DWT features, which not only achieves good classification between robustness and discrimination but can also be applied to RR-IQA.
A key step of our hashing is the extraction of weighted DWT statistical features, which provide the capability of measuring perceptual change in distorted images. Experiments with three open image databases have been conducted to validate the efficiency of our hashing. The ROC curve comparisons have shown that our hashing outperforms seven state-of-the-art hashing algorithms in classification performance with respect to robustness and discrimination. The IQA comparisons have illustrated that the overall performance of our hashing is superior to PSNR and SSIM. Our research on image hashing algorithms is ongoing. Future research will focus on hashing algorithms based on visual attention models, hashing algorithms based on sparse models, hashing algorithms with deep learning techniques, and so on.

ACKNOWLEDGEMENTS
The authors would like to thank the anonymous referees and the editor for their valuable comments and suggestions, which have substantially improved this paper.

FUNDING
This work was supported by the National Natural Science Foundation of China (grant numbers 61562007, 61300109, 61762017, 61702332, 61363034), Guangxi ‘Bagui Scholar’ Teams for Innovation and Research, the Guangxi Natural Science Foundation (grant numbers 2017GXNSFAA198222, 2015GXNSFDA139040), the Project of Guangxi Science and Technology (grant number GuiKeAD17195062) and the Project of the Guangxi Key Lab of Multi-source Information Mining & Security (grant numbers 16-A-02-02, 15-A-02-02, 14-A-02-02).

REFERENCES
1 Venkatesan, R., Koon, S.-M., Jakubowski, M.H. and Moulin, P. (2000) Robust image hashing. Proc. IEEE Int. Conf. Image Processing (ICIP 2000), Vancouver, BC, Canada, 10–13 September, pp. 664–666. IEEE Press, New York.
2 Neelima, A. and Singh, K.M. (2016) Perceptual hash function based on scale-invariant feature transform and singular value decomposition. Comput. J., 59, 1275–1281.
3 Slaney, M. and Casey, M.
(2008) Locality-sensitive hashing for finding nearest neighbors. IEEE Signal Process. Mag., 25, 128–131.
4 Wang, X., Zheng, N., Xue, J. and Liu, Z. (2012) A novel image signature method for content authentication. Comput. J., 55, 686–701.
5 Vadlamudi, L.N., Vaddella, R.P.V. and Devara, V. (2016) Robust hash generation technique for content-based image authentication using histogram. Multimed. Tools Appl., 75, 6585–6604.
6 Varghese, A., Balakrishnan, K., Varghese, R.R. and Paul, J.S. (2014) Content-based image retrieval of axial brain slices using a novel LBP with a ternary encoding. Comput. J., 57, 1383–1394.
7 Lu, C.-S., Hsu, C.Y., Sun, S.-W. and Chang, P.-C. (2004) Robust mesh-based hashing for copy detection and tracing of images. Proc. IEEE Int. Conf. Multimedia & Expo (ICME 2004), Taipei, Taiwan, 27–30 June, vol. 1, pp. 731–734. IEEE Press, New York.
8 Lu, W. and Wu, M. (2010) Multimedia forensic hash based on visual words. Proc. IEEE Int. Conf. Image Processing (ICIP 2010), Hong Kong, China, 26–29 September, pp. 989–992. IEEE Press, New York.
9 Tang, Z., Wang, S., Zhang, X., Wei, W. and Su, S. (2008) Robust image hashing for tamper detection using non-negative matrix factorization. J. Ubiquitous Convergence Technol., 2, 18–26.
10 Tang, Z., Zhang, X., Li, X. and Zhang, S. (2016) Robust image hashing with ring partition and invariant vector distance. IEEE Trans. Inf. Forensics Secur., 11, 200–214.
11 Fridrich, J. and Goljan, M. (2000) Robust hash functions for digital watermarking. Proc. IEEE Int. Conf. Information Technology: Coding and Computing, Las Vegas, Nevada, USA, 27–29 March, pp. 178–183. IEEE Press, New York.
12 Lefebvre, F., Macq, B. and Legat, J.-D. (2002) RASH: Radon soft hash algorithm. Proc. European Signal Processing Conf.
(EUSIPCO 2002), Toulouse, France, 3–6 September, pp. 299–302. IEEE Press, New York.
13 Ou, Y. and Rhee, K.H. (2009) A key-dependent secure image hashing scheme by using Radon transform. Proc. IEEE Int. Symp. Intelligent Signal Processing and Communication Systems (ISPACS 2009), Kanazawa, Japan, 7–9 December, pp. 595–598. IEEE Press, New York.
14 Lei, Y., Wang, Y. and Huang, J. (2011) Robust image hash in Radon transform domain for authentication. Signal Process. Image Commun., 26, 280–288.
15 Lin, C. and Chang, S. (2001) A robust image authentication method distinguishing JPEG compression from malicious manipulation. IEEE Trans. Circuits Syst. Video Technol., 11, 153–168.
16 Ahmed, F., Siyal, M. and Abbas, V. (2010) A secure and robust hash-based scheme for image authentication. Signal Process., 90, 1456–1470.
17 Tang, Z., Wang, S., Zhang, X., Wei, W. and Zhao, Y. (2011) Lexicographical framework for image hashing with implementation based on DCT and NMF. Multimed. Tools Appl., 52, 325–345.
18 Tang, Z., Dai, Y., Zhang, X., Huang, L. and Yang, F. (2014) Robust image hashing via colour vector angles and discrete wavelet transform. IET Image Process., 8, 142–149.
19 Kozat, S.S., Mihcak, K. and Venkatesan, R. (2004) Robust perceptual image hashing via matrix invariants. Proc. IEEE Int. Conf. Image Processing (ICIP 2004), Singapore, 24–27 October, pp. 3443–3446. IEEE Press, New York.
20 Davarzani, R., Mozaffari, S. and Yaghmaie, K. (2016) Perceptual image hashing using center-symmetric local binary patterns. Multimed. Tools Appl., 75, 4639–4667.
21 Monga, V. and Mihcak, M.K. (2007) Robust and secure image hashing via non-negative matrix factorizations. IEEE Trans. Inf. Forensics Secur., 2, 376–390.
22 Tang, Z., Zhang, X. and Zhang, S. (2014) Robust perceptual image hashing based on ring partition and NMF. IEEE Trans. Knowl. Data Eng., 26, 711–724.
23 Li, Y., Lu, Z., Zhu, C. and Niu, X. (2012) Robust image hashing based on random Gabor filtering and dithered lattice vector quantization. IEEE Trans. Image Process., 21, 1963–1980.
24 Tang, Z., Huang, L., Dai, Y. and Yang, F. (2012) Robust image hashing based on multiple histograms. Int. J. Digit. Content Technol. Appl., 6, 39–47.
25 Zhao, Y., Wang, S., Zhang, X. and Yao, H. (2013) Robust hashing for image authentication using Zernike moments and local features. IEEE Trans. Inf. Forensics Secur., 8, 55–63.
26 Yan, C., Pun, C. and Yuan, X. (2016) Multi-scale image hashing using adaptive local feature extraction for robust tampering detection. Signal Process., 121, 1–16.
27 Qin, C., Chen, X., Ye, D., Wang, J. and Sun, X. (2016) A novel image hashing scheme with perceptual robustness using block truncation coding. Inf. Sci., 361, 84–99.
28 Huang, X., Liu, X., Wang, G. and Su, M. (2016) A robust image hashing with enhanced randomness by using random walk on zigzag blocking. Proc. 2016 IEEE TrustCom/BigDataSE/ISPA, Tianjin, China, 23–26 August, pp. 14–18. IEEE Press, New York.
29 Tang, Z., Huang, Z., Zhang, X. and Lao, H. (2017) Robust image hashing with multidimensional scaling. Signal Process., 137, 240–250.
30 Tang, Z., Huang, L., Zhang, X. and Lao, H. (2016) Robust image hashing based on color vector angle and Canny operator. AEÜ-Int. J. Electron. Commun., 70, 833–841.
31 Zhang, J., Chang, W. and Wu, L.
(2010) Edge detection based on general grey correlation and LoG operator. Proc. 2010 Int. Conf. Artificial Intelligence and Computational Intelligence, Sanya, China, 23–24 October, pp. 480–483. IEEE Press, New York.
32 Canny, J. (1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8, 679–698.
33 Jegou, H., Copydays dataset, http://lear.inrialpes.fr/~jegou/data.php (accessed May 28, 2016).
34 Petitcolas, F.A.P. (2000) Watermarking schemes evaluation. IEEE Signal Process. Mag., 17, 58–64.
35 Schaefer, G. and Stich, M. (2004) UCID - An Uncompressed Colour Image Database. Proc. SPIE, Storage and Retrieval Methods and Applications for Multimedia, San Jose, USA, 20 January, pp. 472–480. SPIE Press, Bellingham, Washington.
36 Fawcett, T. (2006) An introduction to ROC analysis. Pattern Recognit. Lett., 27, 861–874.
37 IEEE Std 754-2008 (2008) IEEE Standard for Floating-Point Arithmetic, pp. 1–70. IEEE Press, New York.
38 Wu, J., Lin, W., Shi, G. and Liu, A. (2013) Reduced-reference image quality assessment with visual information fidelity. IEEE Trans. Multimed., 15, 1700–1705.
39 Tang, Z., Wang, S., Zhang, X. and Wei, W. (2009) Perceptual similarity metric resilient to rotation for application in robust image hashing. Proc. 3rd Int. Conf. Multimedia and Ubiquitous Engineering (MUE 2009), Qingdao, China, 4–6 June, pp. 183–188. IEEE Press, New York.
40 Sheikh, H.R., Wang, Z., Cormack, L. and Bovik, A.C. (2012) LIVE Image Quality Assessment Database Release 2, http://live.ece.utexas.edu/research/quality (accessed May 12, 2012).
41 Tanchenko, A. (2014) Visual-PSNR measure of image quality. J. Vis. Commun. Image Representation, 25, 874–878.
42 Wang, Z., Bovik, A.C., Sheikh, H.R. and Simoncelli, E.P.
(2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13, 600–612.
43 Kim, J., Lee, J., Lee, C., Park, E., Kim, J., Kim, H., Lee, J. and Jeong, H. (2013) Optimal feature selection for pedestrian detection based on logistic regression analysis. Proc. 2013 IEEE Int. Conf. Systems, Man, and Cybernetics (SMC 2013), Manchester, United Kingdom, 13–16 October, pp. 239–242. IEEE Press, New York.

Author notes: Handling editor: Fionn Murtagh
© The British Computer Society 2018. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
The Computer Journal, Oxford University Press

ISSN 0010-4620; eISSN 1460-2067; DOI 10.1093/comjnl/bxy047
Image hashing has been successfully used in many applications [3–8], such as image retrieval, image authentication, image indexing, image copy detection, digital watermarking and multimedia forensics. Note that, in many practical applications, digital images often undergo content-preserving operations, such as JPEG compression, brightness adjustment, contrast adjustment, gamma correction and low-pass filtering. These operations alter their digital representations, but do not change their visual contents. Therefore, a content-based image hash should be kept unchanged after these operations. In other words, image hashing should be robust against content-preserving operations. This is the first property of image hashing, called perceptual robustness [1, 9]. Another basic property of image hashing is called discrimination [1, 10]. It requires that images with different visual contents should have different image hashes. Besides the two basic properties, image hashing should satisfy additional properties for some special applications. For example, it should measure the perceptual difference of digital images for quality assessment. In the past decade, many image hashing algorithms have been proposed for tackling different applications. For example, Venkatesan et al. [1] used discrete wavelet transform (DWT) coefficients to construct image hash. This hashing can be used for image indexing. Fridrich and Goljan [11] exploited the projections between the input image and direct current (DC)-free random smooth patterns to generate the hash. This method can be applied to digital watermarking. Lefebvre et al. [12] used the Radon transform (RT) to extract image hash. This scheme can resist geometric transforms (e.g. image rotation and image scaling), but its discrimination needs improvement. Ou and Rhee [13] applied the 1D discrete cosine transform (DCT) to selected RT projections and took the first alternating current (AC) coefficient of each projection to make the hash.
This RT–DCT hashing is robust to JPEG compression and filtering, but its discrimination is poor. In other work, Lei et al. [14] used RT, invariant moments and the discrete Fourier transform (DFT) to construct image hash. Lin and Chang [15] designed a hashing algorithm with invariant relations between DCT coefficients to distinguish JPEG compression from malicious tampering operations. Ahmed et al. [16] combined DWT and Secure Hash Algorithm 1 (SHA-1) to design a hashing algorithm. These hashing algorithms [14–16] can be used for image authentication, but have weaknesses in resisting some content-preserving operations, such as image rotation and brightness adjustment. Tang et al. [17] proposed a novel lexicographical framework for hash generation, and presented a hashing algorithm with DCT and non-negative matrix factorization (NMF). This algorithm has good performance in image retrieval. To exploit the discrimination of color images, Tang et al. [18] took color vector angles (CVA) as the color feature and compressed them with DWT. Kozat et al. [19] used singular value decomposition (SVD) to construct image hash. They randomly divided the input image into overlapping blocks, applied SVD to every block and constructed a secondary image by using the ‘first’ left and right singular vectors of all blocks. Next, they divided the secondary image into overlapping blocks, re-applied SVD to every block and formed the image hash by combining the ‘first’ left and right singular vectors of all blocks. The SVD–SVD hashing [19] can resist image rotation, but its discrimination falls far short of desirable performance. Davarzani et al. [20] exploited SVD and center-symmetric local binary patterns (CSLBP) to design image hashing for authentication. The discrimination of the SVD–CSLBP hashing is also undesirable. Inspired by the SVD–SVD hashing [19], Monga and Mihcak [21] presented a similar image hashing algorithm by replacing SVD with NMF.
The NMF–NMF–statistics quantization (SQ) hashing [21] outperforms the SVD–SVD hashing, but is sensitive to watermark embedding. In another work, Tang et al. [22] designed an efficient image hashing with ring partition and NMF, where the ring partition divides the input image into some rings, i.e. annular regions. This hashing shows better classification performance than the NMF–NMF–SQ hashing, and can be used in content change detection. Li et al. [23] incorporated random Gabor filtering (GF) with dithered lattice vector quantization (LVQ) to design image hashing. The GF–LVQ hashing achieves good robustness against some digital operations, but its discrimination should be improved. Tang et al. [24] extracted multiple histograms (MH) from different rings of the input image to generate the hash. The MH-based hashing can resist any-angle rotation. Zhao et al. [25] exploited Zernike moments and salient features to form image hash for authentication. This hashing can resist rotation within five degrees. Recently, Yan et al. [26] used adaptive local feature extraction to design multi-scale image hashing. This method can be used for tampering detection. Qin et al. [27] presented an image hashing scheme with block truncation coding. This scheme has good robustness and can be applied in image retrieval. Huang et al. [28] designed a secure image hashing via random walk on zigzag blocking. The random walk-based hashing reaches good security. Tang et al. [29] proposed to calculate image hash by jointly using multidimensional scaling, the log-polar transform (LPT) and DFT. This algorithm can be used in image copy detection. Although researchers have proposed some useful image hashing techniques, there are still many unsolved problems in practice. For example, the current hashing algorithms do not reach desirable classification performance between robustness and discrimination.
In addition, their performances in applications to reduced-reference (RR) image quality assessment (IQA) are rarely investigated. In this paper, we propose a perceptual image hashing with weighted DWT statistical features. Our hashing can reach good classification performance between robustness and discrimination. Experiments with three open image datasets are conducted to validate the efficiency of our hashing. Receiver operating characteristics (ROC) results illustrate that our classification performance outperforms that of some state-of-the-art algorithms. The application of our hashing in RR-IQA is discussed, and the results show that our hashing has better performance than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) in IQA. The remainder of this paper is organized as follows. Section 2 introduces the proposed hashing algorithm. Sections 3 and 4 present experimental results and performance comparisons, respectively. Section 5 describes the application in IQA. Conclusions are finally given in Section 6.

2. PROPOSED IMAGE HASHING

Image edge is an important visual feature for distinguishing images by the human vision system (HVS) [30]. As image edge is generally robust to content-preserving operations and the influences of these operations are well preserved in the edge image, it can be used to design perceptual image hashing. In addition, the HVS has different sensitivities when observing different image information in terms of frequency and direction. Since the 2D DWT can decompose an input image into a coarse representation and several detail representations in different directions, it can be used to extract perceptual features for image hashing. Based on these considerations, we propose a perceptual image hashing by incorporating image edge with the 2D DWT. As shown in Fig. 1, our hashing consists of four steps. The first step, preprocessing, creates a normalized image.
The second step finds the image edge, and the third step extracts the weighted DWT features. Finally, the weighted features are quantized to make a compact hash. Details of these steps are explained in the following sections.

FIGURE 1. Steps of the proposed hashing.

2.1. Preprocessing

In this step, bi-linear interpolation and color space conversion are exploited to generate a normalized image for feature extraction. Specifically, bi-linear interpolation is first used to convert the input image to a standard size M × M. This ensures that our hashing can resist image scaling and that digital images with different resolutions have the same hash length. The image in RGB color space is then mapped to YCbCr color space, and the luminance component Y is taken for representation as it contains most of the geometric and visually significant information. Let R, G and B be the red, green and blue components of a pixel. The conversion from RGB color space to YCbCr color space can be conducted by the following equation [22]:

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112 \\ 112 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \qquad (1)$$

where Y is the luminance component of the pixel, and Cb and Cr are the blue-difference and red-difference chroma components of the pixel, respectively.

2.2. Edge detection

Image edge is a useful feature for distinguishing images and has been successfully used in many applications, such as image retrieval, image classification and image recognition. In this paper, we exploit image edge to construct the hash. This is based on the consideration that image edge can represent the input image and that the influences of digital operations are well preserved in the edge image. In the past years, researchers have proposed some useful edge detection methods [31, 32], such as the Prewitt operator, Sobel operator, LoG (Laplacian of Gaussian) operator and Canny operator.
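As a concrete illustration of Equation (1), a single pixel can be converted as follows. This is a minimal sketch, not the authors' code; it assumes R, G and B are normalized to [0, 1], as in the common BT.601 formulation (real implementations such as MATLAB's rgb2ycbcr operate on whole images):

```python
def rgb_to_ycbcr(r, g, b):
    """Per-pixel RGB -> YCbCr conversion following Equation (1).

    r, g and b are assumed normalized to [0, 1]; Y then falls in the
    nominal range [16, 235] and Cb/Cr in [16, 240].
    """
    y  =  65.481 * r + 128.553 * g +  24.966 * b +  16.0
    cb = -37.797 * r -  74.203 * g + 112.0   * b + 128.0
    cr = 112.0   * r -  93.786 * g -  18.214 * b + 128.0
    return y, cb, cr

# A mid-gray pixel is achromatic: Cb and Cr both map to the neutral value 128.
print(rgb_to_ycbcr(0.5, 0.5, 0.5))
```

Only the Y channel computed this way is kept for the subsequent edge detection and DWT steps.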
Here we choose the Canny operator as the detection method since it keeps a good tradeoff between detection performance and computational cost. The steps of the classical Canny operator are summarized as follows: (1) Calculate a smooth image with a Gaussian filter to reduce the influence of noise on the detection result. (2) Compute the gradients of all pixels in the smoothed image. (3) Use non-maximum suppression to avoid spurious responses. (4) Determine candidate edges by double thresholds. (5) Track edges via hysteresis. For more details of the Canny operator, please refer to its original paper [32]. Figure 2 shows an example of edge detection, where (a) is a grayscale image and (b) is the detection result.

FIGURE 2. An example of edge detection. (a) Grayscale image and (b) image edge.

2.3. Weighted DWT feature extraction

In general, some content-preserving operations, such as JPEG compression, low-pass filtering and noise contamination, have slight influence on the low frequency sub-band, but will significantly change the high frequency sub-bands. To measure the perceptual change of a digital image, features extracted from different frequency sub-bands should have different weights. Specifically, the weight of an image feature extracted from a high frequency sub-band should be bigger than that of an image feature extracted from a low frequency sub-band. Note that, when a single-level 2D DWT is applied to an image, four sub-bands are generated, i.e. the LL, LH, HL and HH sub-bands, where the LL sub-band is a low frequency sub-band and the other three sub-bands are high frequency sub-bands. Based on these considerations, we propose to extract weighted DWT features for constructing the perceptual hash. The detailed extraction is explained as follows.
First, the edge image is divided into non-overlapping blocks of size N × N, where M is an integer multiple of N for simplicity. Consequently, L = (M/N)^2 image blocks are available. For each block, a three-level 2D DWT is applied and 10 sub-bands are generated, including one low frequency sub-band, i.e. SLL3, and nine high frequency sub-bands, i.e. SLH3, SHL3, SHH3, SLH2, SHL2, SHH2, SLH1, SHL1 and SHH1. Figure 3 illustrates the schematic diagram of a three-level 2D DWT, where the LL sub-band at level 3 is denoted SLL3, the LH sub-band at level 3 is denoted SLH3, …, and the HH sub-band at level 1 is denoted SHH1. Clearly, the 10 sub-bands can be further divided into four categories in terms of the property of their DWT coefficients. (i) Approximation coefficients: elements in SLL3 are the approximation coefficients. (ii) Detail coefficients in the horizontal direction: elements in SLH1, SLH2 and SLH3 are the detail coefficients at different levels in the horizontal direction. (iii) Detail coefficients in the vertical direction: elements in SHL1, SHL2 and SHL3 are the detail coefficients at different levels in the vertical direction. (iv) Detail coefficients in the diagonal direction: elements in SHH1, SHH2 and SHH3 are the detail coefficients at different levels in the diagonal direction.

FIGURE 3. Schematic diagram of a three-level 2D DWT.

According to the categories of DWT coefficients, the weighted DWT feature of each block is calculated as follows. (1) Calculate the mean μ0 of the DWT coefficients of SLL3:

$$\mu_0 = \frac{1}{p}\sum_{i=1}^{p} S_{LL3}(i) \qquad (2)$$

where p is the total number of DWT coefficients of SLL3 and SLL3(i) is the ith element of SLL3 (1 ≤ i ≤ p). (2) Concatenate the DWT coefficients of SLH1, SLH2 and SLH3 to form a vector SLH.
Calculate the variance v1 of the elements of SLH by the following equation:

$$v_1 = \frac{1}{k-1}\sum_{j=1}^{k}\left[S_{LH}(j) - \mu_{LH}\right]^2 \qquad (3)$$

in which k is the number of elements of SLH, SLH(j) is the jth element of SLH (1 ≤ j ≤ k), and μLH is the mean of the elements of SLH defined as follows:

$$\mu_{LH} = \frac{1}{k}\sum_{j=1}^{k} S_{LH}(j) \qquad (4)$$

(3) Concatenate the DWT coefficients of SHL1, SHL2 and SHL3 to form a vector SHL. Compute the variance v2 of the elements of SHL by the following equation:

$$v_2 = \frac{1}{b-1}\sum_{j=1}^{b}\left[S_{HL}(j) - \mu_{HL}\right]^2 \qquad (5)$$

in which b is the number of elements of SHL, SHL(j) is the jth element of SHL (1 ≤ j ≤ b), and μHL is the mean of the elements of SHL defined as follows:

$$\mu_{HL} = \frac{1}{b}\sum_{j=1}^{b} S_{HL}(j) \qquad (6)$$

(4) Similarly, concatenate the DWT coefficients of SHH1, SHH2 and SHH3 to form a vector SHH. Calculate the variance v3 of the elements of SHH by the following equation:

$$v_3 = \frac{1}{t-1}\sum_{j=1}^{t}\left[S_{HH}(j) - \mu_{HH}\right]^2 \qquad (7)$$

where t is the number of elements of SHH, SHH(j) is the jth element of SHH (1 ≤ j ≤ t), and μHH is the mean of the elements of SHH defined as follows:

$$\mu_{HH} = \frac{1}{t}\sum_{j=1}^{t} S_{HH}(j) \qquad (8)$$

(5) The DWT feature is determined by the weighted sum of the above statistics of DWT coefficients as follows:

$$s = \mu_0 w_0 + v_1 w_1 + v_2 w_2 + v_3 w_3 \qquad (9)$$

where w0, w1, w2 and w3 are the weights of μ0, v1, v2 and v3, which satisfy the following equation:

$$w_0 + w_1 + w_2 + w_3 = 1 \qquad (10)$$

In general, we select the weights following the relation w0 < w1 = w2 < w3. This is based on the following considerations. The influences of content-preserving operations fall mainly on the high frequency sub-bands. Their change on the low frequency sub-band is slight, while their influence on the sub-band in the diagonal direction is the most significant. Moreover, the sub-bands in the horizontal and vertical directions are generally of the same importance. Let si be the weighted DWT feature of the ith image block, where 1 ≤ i ≤ L. Thus, we obtain a feature sequence s for representing the input image as follows:

$$\mathbf{s} = [s_1, s_2, \ldots, s_L] \qquad (11)$$

2.4.
Quantization

To reduce the storage cost of the proposed hashing, the feature sequence s is quantized to an integer representation by the following equation:

$$h_i = \mathrm{round}(s_i \times 100) \qquad (12)$$

where round(·) is the function rounding its input to the nearest integer. Finally, our image hash h is available as follows:

$$\mathbf{h} = [h_1, h_2, \ldots, h_L] \qquad (13)$$

Clearly, our hash length is L integers. In the experiments, we find that each integer of our hash requires only 6 bits for storage. Therefore, our hash length is 6L bits in binary form, which will be validated in Section 3.3.

2.5. Hash similarity calculation

Assume that $\mathbf{h}_1 = [h_1^{(1)}, h_2^{(1)}, \ldots, h_L^{(1)}]$ and $\mathbf{h}_2 = [h_1^{(2)}, h_2^{(2)}, \ldots, h_L^{(2)}]$ are the image hashes of two images. To measure their similarity, the Euclidean distance is taken as the metric, which is defined as follows:

$$d(\mathbf{h}_1, \mathbf{h}_2) = \sqrt{\sum_{i=1}^{L} \left(h_i^{(1)} - h_i^{(2)}\right)^2} \qquad (14)$$

In general, a smaller Euclidean distance means more similar images. If the Euclidean distance d is bigger than a given threshold T, the images of the input hashes are judged as different images. Otherwise, the images are viewed as a pair of similar images.

3. EXPERIMENTAL RESULTS

In this section, many experiments are carried out to validate the performance of our image hashing. The parameter settings used in these experiments are as follows. The input image is resized to 512 × 512 and the block size is 64 × 64, i.e. M = 512 and N = 64. For the Canny operator, the standard deviation of the Gaussian filter is 1.5 and the double thresholds are ρ1 = 0.04 and ρ2 = 0.10. The symlet wavelet (‘sym8’ in MATLAB) is used as the wavelet filter of the 2D DWT. The weights for the statistics of DWT coefficients are w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6. Therefore, our hash length is L = (M/N)^2 = 64 integers. Sections 3.1 and 3.2 analyze robustness and discrimination, respectively. Our hash length in binary form is discussed in Section 3.3. ROC performances under different parameter settings are presented in Section 3.4.

3.1.
Robustness

The well-known open dataset called the Copydays dataset [33] is selected as the image database for robustness validation. This database contains 157 color images, whose sizes range from 1200 × 1600 to 3008 × 2000. Figure 4 presents some sample images of this database. To generate visually similar versions of these 157 color images, some content-preserving operations provided by Photoshop, MATLAB and StirMark 4.0 [34] are exploited to conduct robustness attacks. StirMark 4.0 can be freely downloaded from the following website: http://www.petitcolas.net/watermarking/stirmark/. The used operations include brightness adjustment, contrast adjustment, gamma correction, 3 × 3 Gaussian low-pass filtering, speckle noise, salt and pepper noise, JPEG compression, watermark embedding, image scaling, and the combinational operation of rotation, cropping and rescaling. Table 1 lists the detailed parameter settings of these content-preserving operations. Clearly, 74 different manipulations in total are used in the robustness test. This means that every color image has 74 visually similar versions, so the number of similar images is 157 × 74 = 11 618. Therefore, the total number of images used in the robustness experiment is 11 618 + 157 = 11 775.

FIGURE 4. Sample images of Copydays dataset.

TABLE 1. Digital operations and their parameter settings with the number of images created.
Tool | Operation | Parameter | Parameter setting | Number
Photoshop | Brightness adjustment | Photoshop’s scale | ±10, ±20 | 4
Photoshop | Contrast adjustment | Photoshop’s scale | ±10, ±20 | 4
MATLAB | Gamma correction | γ | 0.75, 0.9, 1.1, 1.25 | 4
MATLAB | 3 × 3 Gaussian low-pass filtering | Standard deviation | 0.3, 0.4, …, 1.0 | 8
MATLAB | Speckle noise | Variance | 0.001, 0.002, …, 0.01 | 10
MATLAB | Salt and pepper noise | Density | 0.001, 0.002, …, 0.01 | 10
StirMark | JPEG compression | Quality factor | 30, 40, …, 100 | 8
StirMark | Watermark embedding | Strength | 10, 20, …, 100 | 10
StirMark | Image scaling | Ratio | 0.5, 0.75, 0.9, 1.1, 1.5, 2.0 | 6
StirMark | Rotation, cropping and rescaling | Angle in degree | ±1, ±2, ±3, ±4, ±5 | 10
Total | | | | 74
We extract image hashes of the 157 original color images and their similar versions, and evaluate their similarity with the Euclidean distance. Table 2 illustrates statistical results of hash distances under different content-preserving operations. It is observed that the minimum distances of these operations are all smaller than 7, and their maximum distances are smaller than 60. As to the mean distance, all values are smaller than 20, except for the operation of rotation, cropping and rescaling.
The mean distance of rotation, cropping and rescaling is 30.24, which is much bigger than those of the other operations. This is because the operation of rotation, cropping and rescaling is a combinational attack, which introduces more distortion on digital images. Moreover, all standard deviations are small, no bigger than 9. If the threshold is selected as T = 30, 92.12% of similar images are correctly identified. When T = 40, the correct detection rate of similar images is 98.26%. When T reaches 60, the correct detection rate is 100%.

TABLE 2. Statistical results of hash distances under different operations.

Operation | Minimum | Maximum | Mean | Standard deviation
Brightness adjustment | 1.41 | 44.52 | 11.20 | 6.33
Contrast adjustment | 1.00 | 34.47 | 9.68 | 4.92
Gamma correction | 1.73 | 55.91 | 13.36 | 7.56
3 × 3 Gaussian low-pass filtering | 0 | 32.62 | 7.83 | 5.58
Speckle noise | 2.24 | 36.89 | 10.36 | 5.56
Salt and pepper noise | 1.00 | 45.55 | 9.97 | 5.13
JPEG compression | 1.00 | 42.98 | 10.31 | 5.81
Watermark embedding | 0 | 45.17 | 9.57 | 6.34
Image scaling | 1.41 | 37.54 | 10.32 | 5.45
Rotation, cropping and rescaling | 6.63 | 54.18 | 30.24 | 8.01
3.2. Discrimination

The open image database called the Uncompressed Color Image Database (UCID) [35] is used to test the discrimination of our hashing. The UCID contains 1338 color images, whose sizes are 512 × 384 or 384 × 512. Figure 5 presents some sample images of UCID. In this experiment, hashes of these 1338 color images are first extracted. For each image, the Euclidean distances between its hash and the hashes of the other 1337 color images are calculated. Finally, 894 453 valid distances are obtained. Figure 6 shows the distribution of these distances, where the x-axis represents the Euclidean distance and the y-axis is the frequency of the Euclidean distance. It is observed that the mean of these Euclidean distances is 112.87 and their standard deviation is 34.38.
Obviously, the mean and standard deviation for different images are much bigger than those for similar images (the biggest mean and standard deviation of similar images are 30.24 and 8.01). This illustrates the good discrimination of our hashing. For example, if T = 30, only 0.001% of different images are wrongly identified as similar images. If T = 40, 0.013% of different images are mistakenly classified. Moreover, if T = 60, 1.197% of different images are wrongly detected as similar images.

FIGURE 5. Sample images of UCID.

FIGURE 6. Distribution of Euclidean distances between hashes of different images.

3.3. Hash length

To determine our hash length in binary form, the hashes of the 1338 color images generated in the discrimination test are taken as the data source for analysis. Since every hash contains 64 integers, there are 64 × 1338 = 85 632 hash elements in total. The distribution of these elements is shown in Fig. 7, where the x-axis is the value of the hash element and the y-axis represents the frequency of the element value. From the result, it is observed that the minimum value is 0 and the maximum value is 63. As 6 bits can represent integers ranging from 0 to 2^6 − 1 = 63, each hash element requires only 6 bits for storage. Therefore, our hash length in binary form is 6L bits. In the experiment, L is 64 and thus our hash length is 6 × 64 = 384 bits.

FIGURE 7. Distribution of hash elements.

3.4. ROC performances under different parameters

In this section, we discuss the effect of the main parameters (i.e. block size and weights of DWT features) on ROC performances. The test image databases used in Sections 3.1 and 3.2 are also adopted.
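The hash construction and comparison evaluated in these experiments, i.e. the quantization of Equation (12), the Euclidean distance of Equation (14) and the 6-bit storage argument of Section 3.3, can be sketched as follows. This is a minimal illustration with made-up feature values (the paper uses L = 64), not the authors' code:

```python
import math

def quantize(features):
    # Equation (12): scale each weighted DWT feature by 100 and round
    # to the nearest integer.
    return [round(s * 100) for s in features]

def hash_distance(h1, h2):
    # Equation (14): Euclidean distance between two integer hashes.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

# Hypothetical feature sequences of two images (L = 4 here for brevity).
s1 = [0.101, 0.253, 0.478, 0.330]
s2 = [0.110, 0.247, 0.470, 0.352]
h1, h2 = quantize(s1), quantize(s2)

# Similarity decision of Section 2.5: similar if the distance is within T.
T = 30
print(hash_distance(h1, h2) <= T)

# Section 3.3: every element lies in [0, 63], so it fits in 6 bits and a
# 64-element hash needs 6 * 64 = 384 bits.
assert all(0 <= x < 2 ** 6 for x in h1 + h2)
```

The integer quantization is what makes the 6-bit-per-element storage bound possible, in contrast to the float-valued hashes compared in Section 4.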
In the experiment, we change only one kind of parameter (block size or weights of DWT features) and keep the other parameters unchanged. The ROC graph [36] is exploited to make visual classification comparisons with respect to robustness and discrimination. In the ROC graph, the x-axis is generally defined as the false positive rate (FPR) PFPR and the y-axis is the true positive rate (TPR) PTPR, which can be determined by the following equations:

$$P_{\mathrm{FPR}}(d \le T) = \frac{n_{\mathrm{FPR}}}{N_{\mathrm{FPR}}} \qquad (15)$$

$$P_{\mathrm{TPR}}(d \le T) = \frac{n_{\mathrm{TPR}}}{N_{\mathrm{TPR}}} \qquad (16)$$

in which nFPR is the number of pairs of different images mistakenly considered as similar images, NFPR is the total number of pairs of different images, nTPR is the number of pairs of visually similar images correctly identified as similar images, and NTPR is the total number of pairs of visually similar images. It is obvious that PFPR and PTPR indicate discrimination and robustness, respectively. A small PFPR represents good discrimination, and a big PTPR means good robustness. Note that an ROC curve is obtained by varying the threshold T to generate a set of points (PFPR, PTPR). An ROC curve near the top-left corner implies a small FPR and a big TPR, and shows better classification performance than curves far away from the top-left corner. For block size, the used parameter settings are 32 × 32, 64 × 64 and 128 × 128. Figure 8 shows the ROC curve comparisons among different block sizes. It is observed that all ROC curves are close to the top-left corner, meaning good robustness and discrimination. Moreover, the ROC curves near the top-left corner are enlarged and presented in the right-bottom part of Fig. 8. Clearly, the ROC curve of 64 × 64 is closer to the top-left corner than the curves of the other block sizes. To make a quantitative analysis, the area under the ROC curve (AUC) [36] is calculated. Note that the range of the AUC is [0, 1]; the bigger the AUC, the better the classification performance. AUC results under different block sizes are listed in Table 3.
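The FPR/TPR sweep of Equations (15) and (16) and the AUC computation can be sketched as follows. The distances are hypothetical and the AUC uses a simple trapezoidal rule; this is an illustration of the evaluation protocol, not the authors' code:

```python
def roc_points(similar_dists, different_dists, thresholds):
    """Compute (FPR, TPR) pairs per Equations (15) and (16).

    similar_dists: hash distances of visually similar image pairs.
    different_dists: hash distances of different image pairs.
    """
    points = []
    for t in sorted(thresholds):
        tpr = sum(d <= t for d in similar_dists) / len(similar_dists)
        fpr = sum(d <= t for d in different_dists) / len(different_dists)
        points.append((fpr, tpr))
    return points

def auc(points):
    # Trapezoidal area under the ROC curve; points are sorted by FPR.
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical distances: similar pairs cluster low, different pairs high.
similar = [5, 8, 12, 20, 31]
different = [60, 75, 90, 110, 130]
pts = [(0.0, 0.0)] + roc_points(similar, different, range(0, 140, 10)) + [(1.0, 1.0)]
print(auc(pts))  # close to 1.0, since the two distributions do not overlap
```

In the paper's experiments, the similar-pair distances come from the Copydays robustness test and the different-pair distances from the UCID discrimination test.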
It is found that the AUC of 64 × 64 is bigger than those of 32 × 32 and 128 × 128. This means that the ROC performance of 64 × 64 is better than the performances of 32 × 32 and 128 × 128.

FIGURE 8. ROC curve comparisons among different block sizes.

TABLE 3. AUC results under different block sizes.

Block size | AUC
32 × 32 | 0.99992
64 × 64 | 0.99998
128 × 128 | 0.99931

For the weights of DWT features, we select four combinations as follows: (i) w0 = 0.05, w1 = 0.15, w2 = 0.15, w3 = 0.65; (ii) w0 = 0.1, w1 = 0.15, w2 = 0.15, w3 = 0.6; (iii) w0 = 0.15, w1 = 0.2, w2 = 0.2, w3 = 0.45; (iv) w0 = 0.2, w1 = 0.25, w2 = 0.25, w3 = 0.3. Figure 9 presents the ROC curve comparisons under different weights, where the curves near the top-left corner are enlarged and shown in the right-bottom part for viewing details. It is observed that the ROC curve of w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6 is closer to the top-left corner than those of the other weight combinations. The AUC of each weight combination is also calculated for quantitative comparison. The results are listed in Table 4. It is found that the AUC of w0 = 0.1, w1 = 0.15, w2 = 0.15 and w3 = 0.6 is bigger than those of the other weight combinations. This illustrates that its ROC performance is better than those of the other weight combinations.

FIGURE 9. ROC curve comparisons among different weights.

TABLE 4. AUC results under different weights.
Weights | AUC
w0 = 0.05, w1 = 0.15, w2 = 0.15, w3 = 0.65 | 0.99992
w0 = 0.10, w1 = 0.15, w2 = 0.15, w3 = 0.60 | 0.99998
w0 = 0.15, w1 = 0.20, w2 = 0.20, w3 = 0.45 | 0.99979
w0 = 0.20, w1 = 0.25, w2 = 0.25, w3 = 0.30 | 0.99694

4. PERFORMANCE COMPARISONS

To show our advantages, we compare our image hashing with seven popular hashing algorithms, including the NMF–NMF–SQ hashing [21], CVA–DWT hashing [18], GF–LVQ hashing [23], SVD–CSLBP hashing [20], random walk-based hashing [28], MH-based hashing [24] and RT–DCT hashing [13]. In the comparisons, the two open image databases used in Section 3 are taken and the parameter settings of the assessed algorithms are as follows. For the NMF–NMF–SQ hashing, the normalized image size is 512 × 512, the block size is 64 × 64, the block number is 80, and the ranks of the first and the second NMFs are 2 and 1, respectively. For the CVA–DWT hashing, its default parameters are used, i.e. the image size is 512 × 512 and the block size is 32 × 32. For the GF–LVQ hashing, the normalized image size is 512 × 512, 40 rings are selected for hash generation and each has a width of three pixels. For the SVD–CSLBP hashing, the optimal parameters reported in the original paper are taken, i.e. the image size is 256 × 256 and the block size is 32 × 32.
For random walk-based hashing, 20 × 20 grids are selected from the image, 48 blocks are then picked out by random walk and each block is represented with 3 bits. For MH-based hashing, the normalized image size is 512 × 512, the number of rings is 5 and the number of segments is 3. For RT–DCT hashing, the normalized image size is 512 × 512, 40 AC coefficients are selected and each is represented with 6 bits. To make fair comparisons, the original metrics of hash similarity of the compared algorithms are adopted here. Therefore, the hash lengths of NMF–NMF–SQ hashing, SVD–CSLBP hashing and MH-based hashing are 64, 64 and 15 floats, respectively. For CVA–DWT hashing, GF–LVQ hashing, random walk-based hashing and RT–DCT hashing, the hash lengths are 960, 120, 144 and 240 bits, respectively. Figure 10 presents the ROC curve comparisons among different hashing algorithms, where the bottom-right part shows the enlarged curves near the top-left corner. As can be seen from Fig. 10, the ROC curve of our hashing is closer to the top-left corner than those of the compared algorithms, so it can be intuitively concluded that our hashing has better classification performance than the compared algorithms. For quantitative analysis, the AUCs of these algorithms are also calculated. Note that the range of AUC is [0, 1]: the bigger the AUC, the better the classification performance. The AUCs of NMF–NMF–SQ hashing, CVA–DWT hashing, GF–LVQ hashing, SVD–CSLBP hashing, random walk-based hashing, MH-based hashing and RT–DCT hashing are 0.9979, 0.9939, 0.9794, 0.9507, 0.9398, 0.8791 and 0.7842, respectively. For our hashing, the AUC is 0.9999, which is bigger than those of all compared algorithms. This illustrates that our hashing is better than the compared algorithms in classification performance between robustness and discrimination.

FIGURE 10. ROC curve comparisons among different algorithms.
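The AUC values above can be reproduced directly from the two empirical sets of hash distances used in the ROC analysis. Below is a minimal sketch (not the authors' code) using the rank-free Mann–Whitney formulation: AUC equals the probability that a randomly chosen visually similar pair has a smaller hash distance than a randomly chosen different pair.

```python
def auc(similar_dists, different_dists):
    """Area under the ROC curve for a distance-based classifier.

    similar_dists: hash distances of visually similar image pairs.
    different_dists: hash distances of different image pairs.
    Counts the fraction of (similar, different) pairs where the similar
    pair has the smaller distance; ties count one half.
    """
    wins = 0.0
    for s in similar_dists:
        for d in different_dists:
            if s < d:
                wins += 1.0
            elif s == d:
                wins += 0.5
    return wins / (len(similar_dists) * len(different_dists))
```

Perfectly separated distance sets give an AUC of 1.0; fully overlapping sets give 0.5, which is why values such as 0.9999 indicate near-perfect separation between robustness and discrimination.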
In addition, the computational time of the different algorithms is measured. All algorithms are coded in MATLAB R2016a, running on a desktop PC with a 3.60 GHz Intel Core i7-7700 CPU and 8.0 GB RAM under Windows 10 (64-bit). The total time consumed in extracting hashes of the 1338 images in the discrimination test of each algorithm is recorded, and the average running time for generating one hash is then calculated. The average times of NMF–NMF–SQ hashing, CVA–DWT hashing, GF–LVQ hashing, SVD–CSLBP hashing, random walk-based hashing, MH-based hashing and RT–DCT hashing are 0.281, 0.018, 0.283, 0.062, 0.014, 0.023 and 0.322 s, respectively. Our average time is 0.123 s. Clearly, our hashing is slower than CVA–DWT hashing, SVD–CSLBP hashing, random walk-based hashing and MH-based hashing, but faster than NMF–NMF–SQ hashing, GF–LVQ hashing and RT–DCT hashing. A summary of the performance comparisons is listed in Table 5. Our AUC is the biggest among the assessed algorithms, indicating that our hashing is superior to the compared algorithms in classification between robustness and discrimination. As to computational time, our hashing has moderate performance. For hash storage, the length of our hash is 64 integers, i.e. 384 bits in binary form. Since a floating-point value requires more bits than such an integer for storage, our hash is shorter than those of NMF–NMF–SQ hashing, CVA–DWT hashing and SVD–CSLBP hashing, but longer than those of GF–LVQ hashing, random walk-based hashing and RT–DCT hashing. For MH-based hashing, the hash length is 15 floats; according to the IEEE Standard [37], at least 32 bits are required for storing a float, so its hash length is 15 × 32 = 480 bits in binary form, which is longer than our hash length.

TABLE 5. Performance comparisons among different hashing algorithms.
Algorithm                  AUC     Time (s)  Hash length
NMF–NMF–SQ hashing         0.9979  0.281     64 floats
CVA–DWT hashing            0.9939  0.018     960 bits
GF–LVQ hashing             0.9794  0.283     120 bits
SVD–CSLBP hashing          0.9507  0.062     64 floats
Random walk-based hashing  0.9398  0.014     144 bits
MH-based hashing           0.8791  0.023     15 floats
RT–DCT hashing             0.7842  0.322     240 bits
Our hashing                0.9999  0.123     384 bits

5. APPLICATION IN REDUCED-REFERENCE IMAGE QUALITY ASSESSMENT

IQA [38] plays an important role in image compression, image transmission, image display and so on. Generally, the reported metrics of IQA can be divided into three categories [39].
(i) Full-reference (FR) IQA: the reference (original) image and its distorted image are both available for quality assessment. (ii) Reduced-reference (RR) IQA: some features of the reference image are known and the distorted image is available. (iii) No-reference IQA: the reference image is unavailable and only the distorted image is used to evaluate quality. In this section, we illustrate the application of our hashing to RR-IQA. Figure 11 presents the block diagram of our hashing applied to RR-IQA. At the sender's side, the hash of the reference image is generated by our hashing and sent to the receiver's side via an auxiliary channel. Meanwhile, the reference image is sent to the receiver's side through the transmission channel. At the receiver's side, the distorted version of the reference image and the hash of the reference image are both received. Next, the hash of the distorted image is extracted by our hashing. Finally, the objective score is obtained by calculating the Euclidean distance between the received hash and the extracted hash. Section 5.1 introduces the dataset used for IQA and Section 5.2 presents IQA performance comparisons.

FIGURE 11. Block diagram of our hashing in application to RR-IQA.

5.1. Dataset for IQA

The well-known open dataset called the LIVE Image Quality Assessment Database [40] is used for IQA. The LIVE image database is provided by the Laboratory for Image & Video Engineering (LIVE) at the University of Texas at Austin. It contains 29 RGB color images and their distorted images. In the experiment, we use these 29 original color images and their distorted versions attacked by Gaussian blur, white Gaussian noise and bit errors in the JPEG2000 bitstream when transmitted over a simulated fast-fading Rayleigh channel (6 distorted images for each operation).
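The receiver-side scoring step of the RR-IQA pipeline reduces to a single distance computation. Below is a minimal sketch (the function name is illustrative, not from the paper), assuming each hash is a fixed-length vector of quantized integers as produced by our scheme:

```python
import math

def objective_score(reference_hash, distorted_hash):
    """Objective quality score at the receiver's side: the Euclidean
    distance between the received reference hash and the hash extracted
    from the distorted image. A larger distance indicates a larger
    perceptual distortion (and thus a worse quality)."""
    if len(reference_hash) != len(distorted_hash):
        raise ValueError("hashes must have the same length")
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(reference_hash, distorted_hash)))
```

An undistorted image yields a score of 0, since its extracted hash matches the received reference hash element by element.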
The fast-fading Rayleigh channel means that the signal transmitted over the channel is affected by fast Rayleigh fading. As there are 29 × 6 = 174 distorted images for each operation, the total number of distorted images is 174 × 3 = 522. Figure 12 presents some sample images of the LIVE image database, where (a) shows the original images, (b) the blurred versions, (c) the noise versions and (d) the distorted versions attacked by bit errors in the JPEG2000 bitstream. For easy comparison, the Differential Mean Opinion Score (DMOS) of every distorted image is provided in the LIVE image database. Note that the range of DMOS is [0, 100]: the smaller the DMOS, the better the quality of the distorted image.

FIGURE 12. Sample images of the LIVE image database. (a) Original images, (b) blurred images, (c) noise images and (d) distorted images attacked by bit errors in JPEG2000 bitstream.

5.2. IQA performance comparison

To evaluate the IQA performance of our hashing, three well-known objective measures are taken: the Linear Pearson Correlation Coefficient (LPCC), the Spearman Rank Order Correlation Coefficient (ROCC) and the Root Mean Square Error (RMSE). Note that the inputs of these metrics are the provided DMOS and the DMOS Prediction (DMOSP). Since the human vision system perceives images in a nonlinear fashion, the DMOS generally changes nonlinearly. To simulate this nonlinear characteristic, the objective score of every distorted image is mapped to the DMOSP by the well-known nonlinear logistic function. Let yi be the ith DMOS, xi the corresponding DMOSP of yi, and n the number of distorted images.
Thus, the LPCC, ROCC and RMSE are defined as follows:

\mathrm{LPCC} = \frac{\sum_{i=1}^{n}(x_i - u_x)(y_i - u_y)}{\sqrt{\sum_{i=1}^{n}(x_i - u_x)^2}\,\sqrt{\sum_{i=1}^{n}(y_i - u_y)^2}}   (17)

\mathrm{ROCC} = 1 - \frac{6\sum_{i=1}^{n}(x_i - y_i)^2}{n(n^2 - 1)}   (18)

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2}   (19)

where ux and uy are the means of {x1, x2, …, xn} and {y1, y2, …, yn}, respectively. The ranges of LPCC and ROCC are both [−1, 1]; a big LPCC or ROCC means good RR-IQA performance. Since RMSE indicates the error between DMOSP and DMOS, a small RMSE implies good RR-IQA performance. For reference, the LPCC, ROCC and RMSE results of two popular FR-IQA algorithms, PSNR [41] and SSIM [42], are also calculated. Table 6 lists the performance comparisons based on the 174 blurred images. It is observed that the LPCC and ROCC values of our hashing are bigger than those of PSNR and SSIM, and our RMSE is smaller than those of PSNR and SSIM. This implies that our hashing is better than PSNR and SSIM in measuring blurred images. Table 7 presents the performance comparisons based on the 174 noise images. Our LPCC and ROCC values are smaller than those of PSNR and SSIM, while our RMSE is bigger than those of PSNR and SSIM. This means that PSNR and SSIM are superior to our hashing in evaluating noise images. Table 8 presents the performance comparisons based on the 174 distorted images attacked by bit errors. Our LPCC and ROCC values are bigger than those of PSNR but smaller than those of SSIM; similarly, our RMSE is smaller than that of PSNR but bigger than that of SSIM. This illustrates that our hashing is better than PSNR but worse than SSIM in evaluating the quality of distorted images attacked by bit errors. PSNR and SSIM have better IQA performance than our hashing in some cases because they are both FR-IQA metrics, while our hashing is an RR-IQA technique: an FR-IQA metric can use the whole information of the reference image, whereas an RR-IQA metric can only use partial information of it, so FR-IQA metrics are generally better than RR-IQA metrics.
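Equations (17)–(19) can be implemented in a few lines. The following is a minimal sketch in plain Python (not the authors' code); for ROCC, the squared differences in Eq. (18) are taken between the ranks of xi and yi, following the standard Spearman formulation:

```python
import math

def lpcc(x, y):
    """Linear Pearson correlation coefficient, Eq. (17)."""
    n = len(x)
    ux, uy = sum(x) / n, sum(y) / n
    num = sum((xi - ux) * (yi - uy) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - ux) ** 2 for xi in x)) * \
          math.sqrt(sum((yi - uy) ** 2 for yi in y))
    return num / den

def _ranks(v):
    # 1-based rank of each element; ties broken by original order for brevity
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def rocc(x, y):
    """Spearman rank order correlation coefficient, Eq. (18), on ranks."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(_ranks(x), _ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def rmse(x, y):
    """Root mean square error between DMOSP and DMOS, Eq. (19)."""
    n = len(x)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / n)
```

For a perfectly monotonic prediction both correlations reach 1, while RMSE measures the residual error in DMOS units, which is why it is reported alongside the two correlations.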
Table 9 presents the overall performance comparisons based on all 522 distorted images. Our LPCC and ROCC values are bigger than those of PSNR and SSIM, and our RMSE is smaller than those of PSNR and SSIM. Therefore, it can be concluded that our overall IQA performance is better than those of PSNR and SSIM.

TABLE 6. Performance comparisons based on 174 blurred images.
Index  PSNR     SSIM    Our hashing
LPCC   0.7420   0.9015  0.9290
ROCC   0.7256   0.9146  0.9205
RMSE   10.5408  6.8041  5.8178

TABLE 7. Performance comparisons based on 174 noise images.
Index  PSNR    SSIM    Our hashing
LPCC   0.9689  0.9708  0.9182
ROCC   0.9820  0.9653  0.9285
RMSE   3.9498  3.8291  6.3267

TABLE 8. Performance comparisons based on 174 distorted images attacked by bit errors.
Index  PSNR    SSIM    Our hashing
LPCC   0.8463  0.9290  0.9039
ROCC   0.8451  0.9291  0.9012
RMSE   8.7611  5.1884  7.0341

TABLE 9. Performance comparisons based on 522 distorted images.
Index  PSNR     SSIM    Our hashing
LPCC   0.7242   0.8546  0.8664
ROCC   0.7336   0.8678  0.8700
RMSE   11.0684  8.3351  8.0157

Moreover, scatter plots of DMOS against DMOSP are drawn, together with fitted curves estimated by logistic regression analysis [43]. Figure 13 shows the comparison results based on the 174 blurred images, where (a) is the result of PSNR, (b) is the result of SSIM and (c) is our result. The points of PSNR and SSIM are scattered over a big area, while our points concentrate around the fitted curve. Figure 14 shows the comparison results based on the 174 noise images: here our points are scattered over a large area, while the fitted results of PSNR and SSIM are good. Figure 15 compares the scatter plots and fitted curves based on the 174 distorted images attacked by bit errors. The fitted curves of SSIM and our hashing have similar shapes but inverse directions, and both outperform the fitted result of PSNR. Figure 16 presents the comparisons of scatter plots and fitted curves based on all 522 distorted images.
It can be seen that the fitted result of our hashing is better than those of PSNR and SSIM. This also verifies that the overall IQA performance of our hashing is better than those of PSNR and SSIM. From the above experimental results, we can conclude that our hashing has good IQA performance and can be applied to RR-IQA.

FIGURE 13. Comparisons of scatter plots and the fitted curves based on 174 blurred images. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 14. Comparisons of scatter plots and the fitted curves based on 174 noise images. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 15. Comparisons of scatter plots and the fitted curves based on 174 distorted images attacked by bit errors. (a) PSNR, (b) SSIM and (c) our hashing.

FIGURE 16. Comparisons of scatter plots and the fitted curves based on 522 distorted images. (a) PSNR, (b) SSIM and (c) our hashing.

6. CONCLUSIONS

In this paper, we have proposed a perceptual image hashing with weighted DWT features, which not only reaches good classification between robustness and discrimination but can also be applied to RR-IQA.
A key step of our hashing is the extraction of weighted DWT statistical features, which provides the capability of measuring perceptual change in distorted images. Experiments with three open image databases have been conducted to validate the efficiency of our hashing. The ROC curve comparisons have shown that our hashing outperforms seven state-of-the-art hashing algorithms in classification performance with respect to robustness and discrimination. IQA comparisons have illustrated that the overall performance of our hashing is superior to PSNR and SSIM. Our research on image hashing algorithms is under way. Future research will focus on hashing algorithms based on visual attention models, hashing algorithms based on sparse models, hashing algorithms with deep learning techniques, and so on.

ACKNOWLEDGEMENTS

The authors would like to thank the anonymous referees and the editor for their valuable comments and suggestions, which substantially improved this paper.

FUNDING

This work was supported by the National Natural Science Foundation of China (grant numbers 61562007, 61300109, 61762017, 61702332, 61363034), Guangxi ‘Bagui Scholar’ Teams for Innovation and Research, the Guangxi Natural Science Foundation (grant numbers 2017GXNSFAA198222, 2015GXNSFDA139040), the Project of Guangxi Science and Technology (grant number GuiKeAD17195062) and the Project of the Guangxi Key Lab of Multi-source Information Mining & Security (grant numbers 16-A-02-02, 15-A-02-02, 14-A-02-02).

REFERENCES

1 Venkatesan, R., Koon, S.-M., Jakubowski, M.H. and Moulin, P. (2000) Robust image hashing. Proc. IEEE Int. Conf. Image Processing (ICIP 2000), Vancouver, BC, Canada, 10–13 September, pp. 664–666. IEEE Press, New York.
2 Neelima, A. and Singh, K.M. (2016) Perceptual hash function based on scale-invariant feature transform and singular value decomposition. Comput. J., 59, 1275–1281.
3 Slaney, M. and Casey, M.
(2008) Locality-sensitive hashing for finding nearest neighbors. IEEE Signal Process. Mag., 25, 128–131.
4 Wang, X., Zheng, N., Xue, J. and Liu, Z. (2012) A novel image signature method for content authentication. Comput. J., 55, 686–701.
5 Vadlamudi, L.N., Vaddella, R.P.V. and Devara, V. (2016) Robust hash generation technique for content-based image authentication using histogram. Multimed. Tools Appl., 75, 6585–6604.
6 Varghese, A., Balakrishnan, K., Varghese, R.R. and Paul, J.S. (2014) Content-based image retrieval of axial brain slices using a novel LBP with a ternary encoding. Comput. J., 57, 1383–1394.
7 Lu, C.-S., Hsu, C.Y., Sun, S.-W. and Chang, P.-C. (2004) Robust mesh-based hashing for copy detection and tracing of images. Proc. IEEE Int. Conf. Multimedia & Expo (ICME 2004), Taipei, Taiwan, 27–30 June, vol. 1, pp. 731–734. IEEE Press, New York.
8 Lu, W. and Wu, M. (2010) Multimedia forensic hash based on visual words. Proc. IEEE Int. Conf. Image Processing (ICIP 2010), Hong Kong, China, 26–29 September, pp. 989–992. IEEE Press, New York.
9 Tang, Z., Wang, S., Zhang, X., Wei, W. and Su, S. (2008) Robust image hashing for tamper detection using non-negative matrix factorization. J. Ubiquitous Convergence Technol., 2, 18–26.
10 Tang, Z., Zhang, X., Li, X. and Zhang, S. (2016) Robust image hashing with ring partition and invariant vector distance. IEEE Trans. Inf. Forensics Secur., 11, 200–214.
11 Fridrich, J. and Goljan, M. (2000) Robust hash functions for digital watermarking. Proc. IEEE Int. Conf. Information Technology: Coding and Computing, Las Vegas, Nevada, USA, 27–29 March, pp. 178–183. IEEE Press, New York.
12 Lefebvre, F., Macq, B. and Legat, J.-D. (2002) RASH: Radon soft hash algorithm. Proc. European Signal Processing Conf.
(EUSIPCO 2002), Toulouse, France, 3–6 September, pp. 299–302. IEEE Press, New York.
13 Ou, Y. and Rhee, K.H. (2009) A key-dependent secure image hashing scheme by using Radon transform. Proc. IEEE Int. Symp. Intelligent Signal Processing and Communication Systems (ISPACS 2009), Kanazawa, Japan, 7–9 December, pp. 595–598. IEEE Press, New York.
14 Lei, Y., Wang, Y. and Huang, J. (2011) Robust image hash in Radon transform domain for authentication. Signal Process. Image Commun., 26, 280–288.
15 Lin, C. and Chang, S. (2001) A robust image authentication method distinguishing JPEG compression from malicious manipulation. IEEE Trans. Circuits Syst. Video Technol., 11, 153–168.
16 Ahmed, F., Siyal, M. and Abbas, V. (2010) A secure and robust hash-based scheme for image authentication. Signal Process., 90, 1456–1470.
17 Tang, Z., Wang, S., Zhang, X., Wei, W. and Zhao, Y. (2011) Lexicographical framework for image hashing with implementation based on DCT and NMF. Multimed. Tools Appl., 52, 325–345.
18 Tang, Z., Dai, Y., Zhang, X., Huang, L. and Yang, F. (2014) Robust image hashing via colour vector angles and discrete wavelet transform. IET Image Process., 8, 142–149.
19 Kozat, S.S., Mihcak, K. and Venkatesan, R. (2004) Robust perceptual image hashing via matrix invariants. Proc. IEEE Int. Conf. Image Processing (ICIP 2004), Singapore, 24–27 October, pp. 3443–3446. IEEE Press, New York.
20 Davarzani, R., Mozaffari, S. and Yaghmaie, K. (2016) Perceptual image hashing using center-symmetric local binary patterns. Multimed. Tools Appl., 75, 4639–4667.
21 Monga, V. and Mihcak, M.K. (2007) Robust and secure image hashing via non-negative matrix factorizations. IEEE Trans. Inf. Forensics Secur., 2, 376–390.
22 Tang, Z., Zhang, X. and Zhang, S. (2014) Robust perceptual image hashing based on ring partition and NMF. IEEE Trans. Knowl. Data Eng., 26, 711–724.
23 Li, Y., Lu, Z., Zhu, C. and Niu, X. (2012) Robust image hashing based on random Gabor filtering and dithered lattice vector quantization. IEEE Trans. Image Process., 21, 1963–1980.
24 Tang, Z., Huang, L., Dai, Y. and Yang, F. (2012) Robust image hashing based on multiple histograms. Int. J. Digit. Content Technol. Appl., 6, 39–47.
25 Zhao, Y., Wang, S., Zhang, X. and Yao, H. (2013) Robust hashing for image authentication using Zernike moments and local features. IEEE Trans. Inf. Forensics Secur., 8, 55–63.
26 Yan, C., Pun, C. and Yuan, X. (2016) Multi-scale image hashing using adaptive local feature extraction for robust tampering detection. Signal Process., 121, 1–16.
27 Qin, C., Chen, X., Ye, D., Wang, J. and Sun, X. (2016) A novel image hashing scheme with perceptual robustness using block truncation coding. Inf. Sci., 361, 84–99.
28 Huang, X., Liu, X., Wang, G. and Su, M. (2016) A robust image hashing with enhanced randomness by using random walk on zigzag blocking. Proc. 2016 IEEE TrustCom/BigDataSE/ISPA, Tianjin, China, 23–26 August, pp. 14–18. IEEE Press, New York.
29 Tang, Z., Huang, Z., Zhang, X. and Lao, H. (2017) Robust image hashing with multidimensional scaling. Signal Process., 137, 240–250.
30 Tang, Z., Huang, L., Zhang, X. and Lao, H. (2016) Robust image hashing based on color vector angle and Canny operator. AEÜ-Int. J. Electron. Commun., 70, 833–841.
31 Zhang, J., Chang, W. and Wu, L.
(2010) Edge detection based on general grey correlation and LoG operator. Proc. 2010 Int. Conf. Artificial Intelligence and Computational Intelligence, Sanya, China, 23–24 October, pp. 480–483. IEEE Press, New York.
32 Canny, J. (1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8, 679–698.
33 Jegou, H., Copydays dataset, http://lear.inrialpes.fr/~jegou/data.php (accessed May 28, 2016).
34 Petitcolas, F.A.P. (2000) Watermarking schemes evaluation. IEEE Signal Process. Mag., 17, 58–64.
35 Schaefer, G. and Stich, M. (2004) UCID - An Uncompressed Colour Image Database. Proc. SPIE, Storage and Retrieval Methods and Applications for Multimedia, San Jose, USA, 20 January, pp. 472–480. SPIE Press, Bellingham, Washington.
36 Fawcett, T. (2006) An introduction to ROC analysis. Pattern Recognit. Lett., 27, 861–874.
37 IEEE Std 754-2008 (2008) IEEE Standard for Floating-Point Arithmetic, pp. 1–70. IEEE Press, New York.
38 Wu, J., Lin, W., Shi, G. and Liu, A. (2013) Reduced-reference image quality assessment with visual information fidelity. IEEE Trans. Multimed., 15, 1700–1705.
39 Tang, Z., Wang, S., Zhang, X. and Wei, W. (2009) Perceptual similarity metric resilient to rotation for application in robust image hashing. Proc. 3rd Int. Conf. Multimedia and Ubiquitous Engineering (MUE 2009), Qingdao, China, 4–6 June, pp. 183–188. IEEE Press, New York.
40 Sheikh, H.R., Wang, Z., Cormack, L. and Bovik, A.C. (2012) LIVE Image Quality Assessment Database Release 2, http://live.ece.utexas.edu/research/quality (accessed May 12, 2012).
41 Tanchenko, A. (2014) Visual-PSNR measure of image quality. J. Vis. Commun. Image Representation, 25, 874–878.
42 Wang, Z., Bovik, A.C., Sheikh, H.R. and Simoncelli, E.P.
(2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13, 600–612.
43 Kim, J., Lee, J., Lee, C., Park, E., Kim, J., Kim, H., Lee, J. and Jeong, H. (2013) Optimal feature selection for pedestrian detection based on logistic regression analysis. Proc. 2013 IEEE Int. Conf. Systems, Man, and Cybernetics (SMC 2013), Manchester, United Kingdom, 13–16 October, pp. 239–242. IEEE Press, New York.

Author notes: Handling editor: Fionn Murtagh

© The British Computer Society 2018. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)

Journal: The Computer Journal (Oxford University Press)

Published: May 4, 2018
