An Experiment-Based Review of Low-Light Image Enhancement Methods
Wang, Wencheng;Wu, Xiaojin;Yuan, Xiaohui;Gao, Zairui;
Received April 20, 2020, accepted May 2, 2020, date of publication May 6, 2020, date of current version May 21, 2020.
Digital Object Identifier 10.1109/ACCESS.2020.2992749
IEEE Access, VOLUME 8, 2020

WENCHENG WANG 1 (Member, IEEE), XIAOJIN WU 1, XIAOHUI YUAN 2 (Senior Member, IEEE), AND ZAIRUI GAO 1
1 College of Information and Control Engineering, Weifang University, Weifang 261061, China
2 College of Engineering, University of North Texas, Denton, TX 76207, USA
Corresponding authors: Wencheng Wang ([email protected]) and Xiaojin Wu ([email protected])

This work was supported in part by the Shandong Provincial Natural Science Foundation under Grant ZR2019FM059, in part by the Science and Technology Plan for the Youth Innovation of Shandong's Universities under Grant 2019KJN012, and in part by the National Natural Science Foundation of China under Grant 61403283.

ABSTRACT Images captured under poor illumination conditions often exhibit characteristics such as low brightness, low contrast, a narrow gray range, and color distortion, as well as considerable noise, which seriously affect the subjective visual effect on human eyes and greatly limit the performance of various machine vision systems. The role of low-light image enhancement is to improve the visual effect of such images for the benefit of subsequent processing. This paper reviews the main techniques of low-light image enhancement developed over the past decades. First, we present a new classification of these algorithms, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. Then, all the categories of methods, including subcategories, are introduced in accordance with their principles and characteristics.
In addition, various quality evaluation methods for enhanced images are detailed, and comparisons of different algorithms are discussed. Finally, the current research progress is summarized, and future research directions are suggested.

INDEX TERMS Review, survey, low-light image enhancement, Retinex method, image enhancement, quality evaluation.

I. INTRODUCTION
With the rapid development of computer vision technology, digital image processing systems have been widely used in many fields, such as industrial production [1], video monitoring [2], intelligent transportation [3], and remote sensing monitoring, and thus play important roles in industrial production [4], daily life [5], military applications [6], etc. However, some uncontrollable factors often exist during the process of image acquisition, resulting in various image defects. In particular, under poor illumination conditions, such as indoors, nighttime, or cloudy days, the light reflected from the object surface may be weak; consequently, the image quality of such a low-light image may be seriously degraded due to color distortions and noise [7]-[10]. After image conversion, storage, transmission and other operations, the quality of this kind of low-light image is further reduced.

Low light, as the name implies, refers to environmental conditions where the illuminance does not meet the normal standard [11]. Any image captured in an environment with relatively weak light is often regarded as a low-light image [12], [13]. Nevertheless, it has thus far been impossible to identify specific theoretical values that define a low-light environment in practical applications, and consequently, no unified standard exists. Therefore, each image-sensor manufacturer has its own standards; for example, Hikvision usually classifies low-light environments into the following categories: dark level (0.01 Lux - 0.1 Lux), moonlight level (0.001 Lux - 0.01 Lux) and starlight level (less than 0.001 Lux).
Images captured in these types of environments exhibit characteristics such as low brightness, low contrast, a narrow gray range and color distortion, as well as considerable noise [14], [15]. Fig. 1 shows three images with low brightness and their corresponding gray histograms, where the X-axis shows the grayscale values and the Y-axis represents the number of pixels. The pixel values of these images are mainly concentrated in the lower range due to the lack of illumination, and the gray difference of the corresponding pixels between the various channels of the color image is limited. There is only a small gap between the maximum and minimum gray levels of the image. The whole color layer exhibits deviations, and the edge information is weak; consequently, it is difficult to distinguish details of the image. These characteristics reduce the usability of such images, seriously degrade their subjective visual effect, and greatly limit the functionality of various visual systems [16]-[18].

FIGURE 1. Examples of low-light images.

The associate editor coordinating the review of this manuscript and approving it for publication was Yudong Zhang.

To weaken the impact of video/image acquisition from low-illumination environments, researchers have pursued various improvements from both the hardware and software perspectives. One approach is to improve the image acquisition system hardware [19]-[21]. Another is to process the images after they are generated. Because low-illumination cameras use high-performance charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology, professional low-light circuits, and filters as the core components to improve the imaging quality for low-light-level imaging, their manufacturing process is highly rigorous, and the technology is complex [22]. Although some professional low-light cameras produced by companies such as Sony, Photonis, SiOnyx and Texas Instruments (TI) have appeared on the market, they are not widely used in daily life because of their high prices. As an alternative approach, the improvement of software algorithms offers great flexibility, and improving the quality of low-light videos and images by means of digital image processing has always been an important direction of research. Therefore, it is of great significance and practical value to study enhancement algorithms for low-light images to improve the performance of imaging devices.

The main purpose of low-light image enhancement is to improve the overall and local contrast of the image, improve its visual effect, and transform the image into a form more suitable for human observation or computer processing, while avoiding noise amplification and achieving good real-time performance [23]-[27]. To this end, it is essential to enhance the validity and availability of data captured under low illumination to obtain clear images or videos [28]. Such enhancement can not only render images more consistent with the subjective visual perception of individuals and improve the reliability and robustness of outdoor visual systems but also allow such images to be more conveniently analyzed and processed by computer vision equipment, which is of great importance for promoting the development of image information mining [29], [30]. Related research results can be widely applied in fields such as urban traffic monitoring, outdoor video acquisition, satellite remote sensing, and military aviation investigation and can be used as a reference for studies on topics such as underwater image analysis and haze image clarity [31]. Moreover, as an important branch of research in the field of image processing, low-light image enhancement has interdisciplinary and innovative appeal and broad application prospects and has become a focus of interdisciplinary research in recent years [32]. A large number of researchers at home and abroad have been paying increasing attention to this field for quite some time [33]-[38].

In the real world, color images are most commonly used, so most of the algorithms are either designed for color image enhancement or derived from gray image enhancement methods. The major methods are listed below.

(i) Enhancement based on the RGB (red, green, blue) color space. The specific steps are as follows. The three color components (R, G and B) are extracted from the original RGB color image. Then, these three components are each individually enhanced using a grayscale image enhancement method. Finally, the three components are merged, and the enhanced results are output. The specific principle is visually summarized in Fig. 2. This method is simple but can result in serious color deviations in the enhanced images because it neglects the correlations between the components.

FIGURE 2. Image enhancement in the RGB color space.

(ii) Enhancement based on the HSI (hue, saturation, intensity) color space (or the YCbCr, L*a*b*, or YUV color space) [39]-[42]. The brightness component I in the HSI color space is separate from and unrelated to the chrominance component H, i.e., the color information of an image. When the chrominance does not change, the brightness and saturation will determine all of the image information. Hence, to enhance a color image, the I and S components are usually enhanced separately while maintaining the same chromaticity H. Fig. 3 shows the flow chart of this enhancement approach.

FIGURE 3. Image enhancement in the HSI color space.

In recent years, a common method used to process color images has been to leave one component unchanged while enhancing the other components based on a space transformation. Notably, the available transformations of the color space are diverse, and the form of an image is not limited to a certain space. Regardless of the color space used, the processing steps are similar to those of an enhancement method based on the HSI space [43]-[45]. Studies on low-light image enhancement technology are currently still being conducted by many researchers. Although promising findings have been obtained, this technology is still not mature. In particular, the available algorithms often have a better effect in a certain aspect than in others. Thus, the field still has significant research value and offers a large development space that is attractive to researchers.

The remainder of this paper is organized as follows. Section II introduces a classification of low-light image enhancement algorithms according to their underlying principles, and the characteristics of the different categories of algorithms are analyzed in detail. In Section III, related quality assessment criteria for enhanced images are described. Several experiments implemented to test the performance of the representative methods are described in Section IV, and the conclusions are summarized and future research directions suggested in the last section.

II. CLASSIFICATION OF LOW-LIGHT IMAGE ENHANCEMENT ALGORITHMS
Scholars worldwide have proposed many image enhancement algorithms for images captured under low-illumination conditions to improve low-light videos and images from different perspectives [1], [3], [7]. In accordance with the algorithms used for brightness enhancement, this paper divides these processing methods into seven classes: gray transformation methods, histogram equalization (HE) methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. These methods can be further divided into different subclasses in accordance with the differences in their principles. The overall classification is depicted in Fig. 4.

FIGURE 4. Classification of low-light image enhancement algorithms.

A. GRAY TRANSFORMATION METHODS
A gray transformation method is a spatial-domain image enhancement algorithm based on the principle of transforming the gray values of single pixels into other gray values by means of a mathematical function [46], which is usually called a mapping-based approach. Such a method enhances an image by modifying the distribution and dynamic range of the gray values of the pixels [1], [7]. The main subclasses of this type of method include linear and nonlinear transformations.

1) LINEAR TRANSFORMATION
A linear transformation of gray values, also known as a linear stretching, is a linear function of the gray values of the input image [1], and the formula is as follows:

g(x, y) = C * f(x, y) + R    (1)

where f(x, y) and g(x, y) represent the input and output images, respectively, and C and R are the coefficients of the linear transformation. An image can be enhanced to different degrees by adjusting the values of the coefficients in the above formula. A corresponding transformation curve is shown in Fig. 5(a). A common formula for a linear gray stretch is as follows:

g(x, y) = ((f(x, y) - f_min) / (f_max - f_min)) * (g_max - g_min) + g_min    (2)

where f_max and f_min represent the maximum and minimum gray values of the input image, respectively, and g_max and g_min represent the maximum and minimum gray values of the output image, respectively [7]. Thus, the dynamic range of the image is transformed from [f_min, f_max] to [g_min, g_max] to enhance the brightness and contrast.
Sometimes, the gray values in only a specific area of the image need to be stretched or compressed by applying a piecewise linear transformation to adjust the contrast; the formula for such a transformation is as follows:

g(x, y) = (c / a) * f(x, y)                            for 0 <= f(x, y) < a
g(x, y) = ((d - c) / (b - a)) * (f(x, y) - a) + c      for a <= f(x, y) < b    (3)
g(x, y) = ((e - d) / (e - b)) * (f(x, y) - b) + d      for b <= f(x, y) <= e

The various functions in the piecewise formula are represented by colored polylines in the coordinate system corresponding to the transformation. The positions of the discontinuity points must be determined individually for each specific image. An example of such a piecewise linear transformation curve is shown in Fig. 5(b).

FIGURE 5. Linear transformation curves.

In the piecewise linear transformation method, parameter optimization can be performed only based on experience or with considerable human participation; thus, it lacks an adaptive mechanism. Additionally, it is difficult to achieve the optimal enhancement effect [46]-[49]. To overcome these problems, a hybrid genetic algorithm combined with differential evolution has been applied for image enhancement processing [50]. The optimal transformation curve was obtained through the adaptive mutation and quick search capabilities of this algorithm. In summary, the principle of linear image enhancement is simple and fast to execute, but the effect is not satisfactory, with some image details typically being lost due to uneven image enhancement.

2) NONLINEAR TRANSFORMATION
The basic idea of a nonlinear gray transformation is to use a nonlinear function to transform the gray values of an image [51]. Frequently used nonlinear transformation functions include logarithmic functions, gamma functions and various other improved functions [52], [53]. A logarithmic transformation function implies that there is a logarithmic relationship between the value of each pixel in the output image and the value of the corresponding pixel in the input image. This type of transformation is suitable for an excessively dark image because it can stretch the lower gray values of the image while compressing the dynamic range of the pixels with higher gray values [54]. The typical formula is as follows:

g(x, y) = log(1 + c * f(x, y))    (4)

where c is a control parameter. The shapes of several logarithmic transformation functions are shown in Fig. 6(a). A logarithmic transformation function stretches the gray values of pixels in low-gray-value areas and compresses the values of pixels in high-gray-value areas.

The gamma function is a nonlinear transformation with broad application whose formula is as follows:

g(x, y) = f(x, y)^gamma    (5)

where gamma denotes the gamma correction parameter, which is usually a constant. Several gamma transformation curves are shown in Fig. 6(b).

FIGURE 6. Nonlinear transformation curves.

As shown in Fig. 6(b), several different transformation curves can be obtained by varying the parameter gamma. When gamma > 1, the transformation compresses the dynamic range of the low-gray-value areas of the image and stretches the range of the high-gray-value areas. In contrast, when gamma < 1, the transformation stretches low gray values and compresses high gray values. When gamma = 1, the output remains the same [55]. Therefore, different gray regions of an image can be selectively stretched or compressed by adjusting this parameter to obtain a better enhancement effect. Drago et al. suggested that the dynamic range of an image can be effectively compressed by mapping the gray values of the image using an adaptive logarithmic function [56]. Tao et al. used the gray value corresponding to a cumulative histogram value of 0.1 to self-adaptively obtain a nonlinear mapping function that can enhance the brightness of dark regions while inhibiting the enhancement of bright regions [57]. Likewise, a low-light color image enhancement algorithm based on a logarithmic processing model was proposed by Tian et al. [58]. First, this algorithm applies a nonlinear enhancement process to the brightness components of an image; then, the membership function is amended by introducing an enhancement operator. Finally, the enhanced image is obtained by means of the inverse transform of the membership function. Huang et al. [59] proposed an adaptive gamma correction algorithm in which the gamma correction parameter is adaptively obtained in accordance with the cumulative probability distribution histogram. However, because this gamma correction method relies on a single parameter, it is prone to cause overenhancement of bright areas. To overcome this shortcoming, a double gamma function was constructed by Zhi et al. and adjusted based on the distribution characteristics of the illumination map [60], thus improving the gray values in low-brightness areas while suppressing the gray values in local high-brightness regions. Moreover, an arctangent hyperbola has been used to map the hue component of an image to an appropriate range by Yu et al. [61], and later, low-light image enhancement based on the optimal hyperbolic tangent profile was proposed [62]. Nonlinear transformation requires more complex calculations and consequently a longer time than linear transformation [63], [64]. In summary, gray transformation can highlight gray areas of interest and has the advantages of simple implementation and fast speed.
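The two nonlinear mappings of Eqs. (4) and (5) can be sketched in a few lines. In this illustration (plain NumPy; function names, the normalization to [0, 1], and the rescaling of the log output back onto [0, 255] are our own choices, not prescribed by the paper):

```python
import numpy as np

def gamma_correct(f, gamma):
    """Gamma transformation of Eq. (5): normalize to [0, 1], raise to gamma."""
    x = f.astype(np.float64) / 255.0
    return (x ** gamma * 255.0).astype(np.uint8)

def log_transform(f, c=1.0):
    """Logarithmic transformation of Eq. (4), rescaled back onto [0, 255]."""
    g = np.log1p(c * f.astype(np.float64))      # log(1 + c*f)
    return (g / np.log1p(c * 255.0) * 255.0).astype(np.uint8)

levels = np.arange(0, 256, 51, dtype=np.uint8)  # [0, 51, 102, 153, 204, 255]
print(gamma_correct(levels, 0.5))               # gamma < 1 lifts dark pixels
print(log_transform(levels))                    # strongest boost at the low end
```

Both curves are monotonic, so pixel ordering is preserved; only the spacing between gray levels changes, which is exactly the selective stretching and compression described above.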
However, such methods do not consider the overall gray distribution of an image; consequently, their enhancement ability is limited, and their adaptability is poor.

B. HISTOGRAM EQUALIZATION (HE) METHODS
If the pixel values of an image are evenly distributed across all possible gray levels, then the image shows high contrast and a large dynamic range. On the basis of this characteristic, the HE algorithm uses the cumulative distribution function (CDF) to adjust the output gray levels to have a probability density function that corresponds to a uniform distribution; in this way, hidden details in dark areas can be made to reappear, and the visual effect of the input image can be effectively enhanced [65], [66].

1) PRINCIPLE OF HE
In the HE method, the CDF is used as the transformation curve for the image gray values [67]-[71]. Let I and L denote an image and its number of gray levels, respectively. I(i, j) represents the gray value at the position with coordinates (i, j), N represents the total number of pixels in the image, and n_k represents the number of pixels of gray level k. Then, the gray-level probability density function of image I is defined as

p(k) = n_k / N,  k = 0, 1, 2, ..., L - 1    (6)

The CDF of the gray levels of image I is

c(k) = sum_{r=0}^{k} p(r),  k = 0, 1, 2, ..., L - 1    (7)

The standard HE algorithm maps the original image to an enhanced image with an approximately uniform gray-level distribution based on the CDF. The mapping relationship is as follows:

f(k) = (L - 1) * c(k)    (8)

An example of HE is shown in Fig. 7, in which (a) presents the input low-light image, (b) displays the histogram of the input low-light image, (c) presents the enhanced image after HE, and (d) displays the histogram of the enhanced image. The principle of the standard HE algorithm is simple and can be executed in real time. However, the brightness of the enhanced image will be uneven, and some details may be lost due to gray-level merging.

FIGURE 7. Example of HE on a grayscale image.

2) BASIC MODELS OF HE METHODS
Depending on the regions considered in the calculation, HE methods can be divided into global histogram equalization (GHE) and local histogram equalization (LHE) [72].

The general concept of a GHE algorithm is illustrated by the model shown in Fig. 8, where X represents the original image, Y represents the enhanced image generated by the HE algorithm, Y = f(X) represents the traditional HE process or an improved version, and X_1, X_2, X_3, ..., X_n represent n subimages composed of pixels in the original image that satisfy certain conditions according to a given property, which is defined as Q(x). The parameter x represents the magnitude of the image gray value, Y_1, Y_2, Y_3, ..., Y_n denote the equalized images corresponding to the n subimages, and the image Y after equalization is obtained by merging the subimages in accordance with the pixel positions.

FIGURE 8. Basic model of GHE algorithms.

The GHE model has several advantages, such as relatively few calculations and high efficiency, and it is especially suitable for the enhancement of overall darker or brighter images [58]. However, it is difficult for a global algorithm, which conducts statistical operations based on the gray values of the whole image, to obtain the optimal recovered values for each local region. Such an algorithm is unable to adapt to the local brightness characteristics of the input image, and consequently, the sense of depth in the image will be decreased after processing.

To solve this problem, many scholars have proposed that an LHE algorithm should be used instead, and such algorithms have hence been put into wide practice. The basic idea of LHE is to apply the HE operation separately to various local areas of an image. The original image is spatially divided into multiple subblocks, and equalization is conducted separately on each subblock to adaptively enhance the local information of the image to achieve the desired enhancement effect. LHE methods can be further divided into three approaches [72], namely, LHE with nonoverlapping subblocks, LHE with overlapping subblocks and LHE with partially overlapping subblocks, as shown in Fig. 9. The implementation process for these algorithms is as follows.

FIGURE 9. Basic model of LHE algorithms.

(i) For an input image of a given size, M x N, a subblock with dimensions of m x n is defined in the upper left corner of the image, and additional subblocks are defined by moving along the horizontal and vertical directions with step sizes of h and w, respectively.
(ii) HE processing is applied to each subblock in the same manner as for a GHE algorithm. Then, the results are added to the output image, and the cumulative number of subblock processing rounds for each pixel is recorded.
(iii) The next subblocks are defined by moving horizontally with the horizontal step size h and vertically with the vertical step size w. For each subblock that does not exceed the image boundary, step (ii) is repeated; if no such unprocessed subblocks remain, the method proceeds to the next step.
(iv) The output image is obtained by dividing the accumulated gray value of each pixel by the corresponding cumulative number of subblock processing rounds.

The disadvantages of this algorithm are the local block effects and the large number of calculations.

3) HE ALGORITHMS
Many algorithms have been developed based on the classic HE approach. For example, Kim first proposed brightness-preserving bi-histogram equalization (BBHE) to maintain the image brightness [73], in which the input image is divided into two subimages I_L and I_U (satisfying the conditions I = I_L U I_U and I_L n I_U = O) using the mean brightness of the original image as a threshold. HE is then applied to each subimage to address the issue of uneven brightness in local areas of the enhanced image. Subsequently, Wang et al. proposed the equal-area dualistic subimage histogram equalization (DSIHE) algorithm [74]. This algorithm uses the median gray value of the original image as a threshold to divide the image into two parts of the same size to maximize its entropy value, thus overcoming the loss of image information caused by the standard HE algorithm. Later, Chen proposed the minimum mean brightness error bi-histogram equalization (MMBEBHE) model [75], which minimizes the mean brightness error between the output image and the original image. Furthermore, Shen et al. proposed the iterative brightness bi-histogram equalization (IBBHE) algorithm [76], in which the segmentation threshold is selected through an iterative method to drive the mean to converge to the optimum while avoiding the confusion between target and background that can occur in traditional HE. Similarly, a BBHE algorithm that preserves color information was proposed by Tian et al. [77]. This algorithm not only retains the color information of the input image but also enhances the image details. Other optimization methods, including local approaches, have also been continuously emerging. For example, a standard adaptive histogram equalization (AHE) algorithm was proposed in [78], while in [79], a block iterative histogram method was used to enhance the image contrast, and a moving template was used for partially overlapped subblock histogram equalization (POSHE) processing for each part of the image. Liu et al. [80] proposed an LHE method that uses a histogram-number-based gray-level protection mechanism on the basis of nonoverlapping subblocks. The spatial positions of the pixels in each block are taken into account when setting the weights, thus effectively eliminating the block effect. Huang and Yeh [81] proposed a novel LHE algorithm that can improve the contrast of an image while maintaining its brightness. Reza [82] proposed the contrast-limited adaptive histogram equalization (CLAHE) algorithm. This algorithm effectively mitigates the block effect that arises in the enhancement process and limits local contrast enhancement by setting a threshold, thus avoiding excessive enhancement of the image contrast. The CLAHE method can be combined with the Wiener filter (WF) or a finite impulse response filter (FIRF) for image contrast enhancement, as discussed in [83] and [84], respectively. Based on BBHE and recursive mean-separate histogram equalization (RMSHE) [85], an LHE algorithm that maintains image brightness was proposed in [86]. In this algorithm, RMSHE is applied to each subblock, thus effectively maintaining the mean brightness of the subblocks. Based on DSIHE and RMSHE, a POSHE method with equal-area recursive decomposition was proposed in [87]. In this algorithm, multiple equal-area recursive decompositions are implemented on the
In [89], the contextual and variational con- trast (CVC) enhancement algorithm was proposed; in this algorithm, the two-dimensional histogram and context infor- mation model of the input image are used to implement nonlinear data mapping to enhance a weakly lighted image. FIGURE 10. Image decomposition (figure best viewed in color). In [59], an HE method with gamma correction was pro- posed and achieved a balance between the quality of the output image and the computing time. However, the HE methods discussed above typically fail to effectively elim- inate the potentially severe interference of noise in weakly illuminated images; in fact, they may even amplify such noise. Therefore, researchers have proposed interpolation- based HE algorithms, in which linear interpolation methods are used to determine the transformation function for the current pixels, thus overcoming the "block effect" caused by nonoverlapping subblocks in HE and achieving a better enhancement effect. In recent years, the newly proposed algorithms have all been combined with image analysis. The background brightness-preserving histogram equaliza- tion (BBPHE) algorithm [90] divides the input image into the background region and the target region, while the dominant orientation-based texture histogram equalization (DOTHE) FIGURE 11. Image enhancement using RGB model (figure best viewed in algorithm [91] divides the image into textured and smooth color). regions. 
Other typical algorithms include gain-controllable clipped histogram equalization (GCCHE) [72], recursive subimage histogram equalization (RSIHE) [92], entropy-based dynamic subhistogram equalization (EDSHE) [93], dynamic histogram equalization (DHE) [94], brightness-preserving dynamic histogram equalization (BPDHE) [95], bi-histogram equalization with a plateau limit (BHEPL) [96], median-mean-based subimage-clipped histogram equalization (MMSICHE) [97], exposure-based subimage histogram equalization (ESIHE) [98], adaptively modified histogram equalization (AMHE) [99], weighted histogram equalization (WHE) [100], a histogram modification framework (HMF) [101], gap adjustment for histogram equalization (CegaHE) [102], and unsharp masking with histogram equalization (UMHE) [103].

To illustrate the performance of HE methods on color images, the AMHE [99], BBHE [73], CLAHE [82], DSIHE [74], HE [66], RMSHE [85], RSIHE [92], and WHE [100] algorithms are tested here in both the RGB and HSI color spaces. The test image and its results are shown in Figs. 10-12.

FIGURE 12. Image enhancement using HSI model.

(i) Equalization and merging of the three R, G and B subimages. The contrast of gray images can be effectively enhanced; however, for a color image with three components (R, G and B), serious color distortion of the image may be
As a result, the color corresponding to this subimage will be either strengthened or weakened after enhancement, resulting in obvious color distortion and inconsistency in the nal color image. Therefore, the primary goal of HE for a color image using this method is to maintain the mean brightness of the image while enhancing the image contrast. FIGURE 14. General process of the Retinex algorithm. (ii) In the HSI model, only the brightness component is equalized. In this method, the input color image is rst con- verted from the RGB color space to the HSI color space, and I(x; y), then the reection component can be separated from then HE enhancement is applied to the brightness compo- the total amount of light, and the inuence of the illumination nent I. Finally, the color image is converted back to the RGB component on the image can be reduced to enhance the space. In this way, the number of equalizations is reduced image [111]. The Retinex algorithm features a sharpening from 3 to 1. However, some calculations are still needed for capability, color constancy, large dynamic range compression the transformation between the color spaces, and there is still and high color delity. The general process of the Retinex a risk of excessive image enhancement. algorithm is shown in Fig. 14, where Log denotes the loga- In summary, HE algorithms can effectively enhance low- rithmic operation and Exp denotes the exponential operation. light images and are often used in combination with other Many researchers have proposed effective image enhance- methods. The visual effect of such an image can be improved ment algorithms based on the Retinex theory. First, Land based on the contrast and detail enhancement provided by an proposed that the illumination component could be estimated HE algorithm. However, these methods can also easily cause by using a random path algorithm to reduce the effect of a loss of color delity and the generation of noise, resulting uneven illumination. 
However, this random path algorithm is in image distortion. complex and has a common effect. Later, a two-dimensional path selection method, namely, the central Retinex algorithm, C. RETINEX METHODS was proposed. Its core idea is as follows: an appropriate The Retinex theory, namely, the theory of the retinal cortex, surround function is selected to determine the weighting of established by Land and McCann, is based on the percep- the pixel values in the neighborhood of the current pixel, tion of color by the human eye and the modeling of color which are then used to replace the current pixel value. Subse- invariance [104]. The essence of this theory is to determine quently, Jobson et al. proposed the single-scale Retinex (SSR) the reective nature of an object by removing the effects of algorithm [112], [113], followed by the multiscale Retinex the illuminating light from the image. According to Retinex (MSR) algorithm and the multiscale Retinex algorithm with theory, the human visual system processes information in color restoration (MSRCR) [114], [115]. a specic way during the transmission of visual informa- tion, thus removing a series of uncertain factors such as the 1) SINGLE-SCALE RETINEX (SSR) intensity of the light source and unevenness of light. Conse- Essentially, the SSR algorithm obtains a reection image by quently, only information that reects essential characteris- estimating the ambient brightness. The formula is as follows: tics of the object, such as the reection coefcient, is retained [105][109]. Based on the illumination-reection model (as log R (x; j) D log I (x; y) log[G(x; y) I (x; y)] (10) i i i shown in Fig. 
13), an image can be expressed as the product of where I(x; y) represents the input image, R(x; y) represents a reection component and an illumination component [110]: the reection image, i represents the various color channels, I(x; y) D R(x; y)L(x; y) (9) (x; y) represents the position of a pixel in the image, G(x; y) represents the Gaussian surround function, and represents where R(x; y) is the reection component, which represents the convolution operator. the reective characteristics of the object surface; L(x; y) The formula for the Gaussian surround function is is the illumination component, which depends on the envi- 2 2 x Cy ronmental light characteristics; and I(x; y) is the received ( ) G(x; y) D Ke (11) image. L(x; y) determines the dynamic range of the image, whereas R(x; y) determines the inherent nature of the image. where is a scale parameter. The smaller the value of this According to Retinex theory, if L(x; y) can be estimated from parameter is, the larger the dynamic range compression of VOLUME 8, 2020 87891 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods the image is, and the clearer the local values are. K is a 3) MULTISCALE RETINEX WITH COLOR normalization factor to ensure that the Gaussian function RESTORATION (MSRCR) satises During the process of image enhancement, the SSR or MSR ZZ algorithm is applied separately to the three color channels, R, G and B. Therefore, compared with the original image, G(x; y)dxdy D 1 (12) the relative proportions of the three color channels may change after enhancement, thus resulting in color distortion. To better mimic the characteristics of the human visual To overcome this problem, MSRCR has been proposed. This system, automatic gain compensation is often required; that algorithm includes a color recovery factor C for each channel, is, the output image is mapped to [0, 255] using a linear gray which is calculated based on the proportional relationship stretching algorithm. 
The mathematical formula is as follows: among the three color channels in the input image and is then used to correct the color of the output image to eliminate color R (x; y) R i min R (x; y) D 255 (13) i distortion. R R max min The formula for the color recovery factor is as follows: 0 th where R (x; y) is the output after gray stretching of the i I (x; y) i i C (x; y) D f ( ) (16) color channel, and R and R are the maximum and max min 3 minimum gray levels of the original image, respectively. I (x; y) Fig. 15 shows the enhancement results obtained using the SSR method when is 15, 80 and 250, respectively. where f denotes the mapping function, and C(x; y) is the color recovery factor. Jobson et al. found that the best color recovery effect is achieved when the mapping function is a logarithmic function, namely, I (x; y) C (x; y) D log( ) (17) I (x; y) FIGURE 15. Enhancement with the SSR algorithm (figure best viewed in The mathematical expression for the MSRCR algorithm color). can be obtained by combining formulas (18) and (15): MSRCR D log R (x; y) However, the SSR algorithm has some limitations. It is i kD1 difcult to maintain a balance between detailed information D C ! flog I (x; y) log[G (x; y) I (x; y)]g enhancement and color delity in images processed with this i k i k i algorithm due to the use of a single scale parameter. (18) 2) MULTISCALE RETINEX (MSR) The algorithm takes advantage of the convolution oper- To maintain a balance between dynamic range compression ation with Gaussian functions. Dynamic range compres- and color constancy, Jobson, Rahman et al. extended the sion and color constancy are achieved for features at large, single-scale algorithm to a multiscale algorithm, namely, the medium and small scales, thus yielding a relatively ideal MSR algorithm [114], which is expressed as follows: visual effect. Experimental results obtained with the SSR, MSR and MSRCR algorithms are shown in Fig. 16. MSR D log R (x; y) kD1 D ! 
flog I (x; y) log[G (x; y) I (x; y)]g (14) k i k i ! D 1 (15) kD1 FIGURE 16. Enhancement with different Retinex algorithms. where i represents the three color channels; k represents the Gaussian surround scales; N is the number of scales, gener- 4) OTHER RETINEX ALGORITHMS ally 3; and the ! parameters are the scale weights. Compared with the SSR algorithm, the MSR algorithm can take advan- Retinex theory conforms to the characteristics of human tage of the benets of multiple scales. The MSR algorithm not visual perception; consequently, it has been widely only enhances image details and contrast but also produces applied and developed. Many enhancement algorithms enhanced images that exhibit better color consistency and an have been proposed based on Retinex theory [116][119]. improved visual effect. Kimmel et al. used the transcendental hypothesis to propose 87892 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods a Retinex algorithm based on a variational framework [120], a bright-pass lter in combination with neighborhood bright- in which the problem of illumination estimation is trans- ness information to maintain image naturalness, not only formed into an optimal quadratic programming problem. improving the image contrast but also better maintaining Despite its high complexity, this algorithm achieves good natural brightness without requiring naturalness preserved results. Elad et al. proposed a noniterative Retinex algorithm enhancement (NPE). Based on Retinex theory, some schol- that can process the edges in an image and suppress noise ars have separated the reection and illumination compo- in dark areas [121]. Meylan et al. proposed a new model for nents and then enhanced the latter using a local nonlinear presenting images with a high dynamic range and adapting transformation model to render a brighter and more natural images both globally and locally to the human visual system. image [137]. 
As an alternative, an enhancement adjustment In this algorithm, an adaptive lter is applied to reduce the factor has been introduced [138] to adjust the enhancement chromatic aberrations caused by halo effects and brightness degrees of different brightness values to avoid noise ampli- modication [122]. Xu et al. proposed a rapid Retinex image cation and color distortion. Fu et al. [139] proposed a Retinex enhancement method that eliminates the halo phenomenon algorithm based on a variation framework that effectively encountered with the traditional Retinex algorithm in areas enhances the contour details of an image while suppressing of high light and dark contrast [123]. abnormal enhancement. In 2014, Zhao proposed a Retinex Likewise, Marcelo et al. proposed a kernel-based Retinex algorithm based on a Markov random eld model. This (KBR) method, which relies on calculating the expected algorithm estimates the illumination component of an image value of a suitable random-variable-weighted kernel func- by means of guided ltering and solves for the reection tion to reduce color error and improve details in a shadow component of the object of interest based on the Markov image [124]. In 2011, Ng et al. proposed a total variation random eld model; additionally, this algorithm solves prob- model based on the Retinex algorithm; in this model, the illu- lems such as detail loss, color distortion and halo effects mination component is spatially smooth, and the reection encountered when the MSRCR algorithm is used to process component is piecewise continuous. Moreover, a fast calcu- nighttime color images [140]. Later, Zhao et al. [141] pro- lation method was used to solve the minimization problem posed a Retinex algorithm based on weighted least squares. posed by the variation model. Finally, the validity of the pro- Jae et al. [142] proposed an MSR algorithm based on posed model was veried through experiments [125]. In addi- subband decomposition with a fusion strategy. 
Moreover, tion, Fu et al. proposed a weighted variation model for the Liu et al. [143] combined the Retinex algorithm with bilateral simultaneous reectivity and illumination estimation (SRIE) ltering, thereby effectively improving the color distortion of observed images; this model can precisely retain the esti- and detail loss in the nal image but also increasing the mated reectivity while inhibiting noise to some extent [126]. complexity of the algorithm [144], [145]. Yin et al. [146], Petro et al. proposed a multiscale Retinex algorithm with Mulyantini and Choi [147], Zhang et al. [148], Ji et al. [149], chromaticity preservation (MSRCP) [127]. First, the image and Zhang et al. [150] proposed Retinex-based algorithms brightness data are processed using the MSR algorithm; then, combined with guided lters [151]. Particularly during the the results are mapped to each channel in accordance with early stage of research on Retinex algorithms, scholars the original proportional relationship among the R, G and B obtained many fruitful ndings. channels. Thus, the image is enhanced while retaining the In short, Retinex algorithms have clear benets and can original color distribution, and the grayish color typically be easily implemented. These methods can not only increase observed in images enhanced using the MSRCR algorithm the contrast and brightness of an image but also has obvious is effectively improved [128]. Later, Matin et al. optimized advantages in terms of color image enhancement. However, the MSRCP method using particle swarm optimization (PSO) these algorithms use the Gaussian convolution template for to avoid manual adjustment of the parameters [129]. 
illumination estimation and do not have the ability to preserve Chen and Beghdadi [130] proposed an image enhance- edges; consequently, they may lead to halo phenomena in ment algorithm based on Retinex and a histogram stretch some regions with sharp boundaries or cause the whole image method to maintain the natural color of images. Shen and to be too bright. Hwang [131] proposed an image enhancement algorithm D. FREQUENCY-DOMAIN METHODS based on Retinex with a robust envelope. To avoid color With the development of multiscale image analysis tech- distortion, Jang et al. [132] proposed an image enhancement algorithm based on the use of MSR to estimate the main color nology, research on image enhancement algorithms has of an image. Inspired by Retinex theory, Wang et al. [133], been extended from the spatial domain to the frequency Xiao et al. [134] used a bionic method to enhance images. domain [152]. Image enhancement methods based on the Chang Hsing Lee et al. proposed an adaptive MSR algo- frequency domain transform an image into the frequency rithm [135] based on brightness classication. For pix- domain for ltering by means of Fourier analysis, and the els in dark areas and bright areas, higher weights were nal image is then inversely transformed back into the spatial given to larger-scale SSR components to enhance the over- domain. Typical frequency-domain methods include homo- all visual effect of the image. Wang et al. [136] proposed morphic ltering (HF) and wavelet transform (WT) methods. VOLUME 8, 2020 87893 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods 1) HOMOMORPHIC FILTERING (HF) equation (24), the image after frequency-domain correction is obtained as follows: HF-based enhancement methods use the characteristics of the illumination-reection model to transform the illumination G(x; y) D expjh (x; y)j expjh (x; y)j (24) L R and reection components in the form of a sum in the log- arithmic domain rather than a product. 
A high-pass filter is used to enhance the high-frequency reflection component and suppress the low-frequency illumination component in the Fourier transform domain [7]. The specific steps of the HF process are listed as follows.

(i) In the illumination-reflection model, the illumination component is multiplied by the reflection component, and this product cannot be processed directly in the frequency domain. Therefore, to allow these components to be handled separately, a logarithmic transformation is first applied to convert the multiplicative components into additive components. Taking the logarithm of both sides of equation (9) yields the following:

ln I(x, y) = ln L(x, y) + ln R(x, y)    (19)

(ii) The image is transformed from the spatial domain into the frequency domain by means of the Fourier transform, i.e., the Fourier transform is applied to both sides of the above equation:

F[ln I(x, y)] = F[ln L(x, y) + ln R(x, y)]    (20)

This equation can be written more concisely as

I(u, v) = L(u, v) + R(u, v)    (21)

where I(u, v), L(u, v) and R(u, v) are the Fourier transforms of ln I(x, y), ln L(x, y) and ln R(x, y), respectively. The spectral function L(u, v) is mainly concentrated in the low-frequency range, while the spectral function R(u, v) is mainly concentrated in the high-frequency range.

(iii) For contrast enhancement, an appropriate high-pass filter is selected, and the R(u, v) component in the frequency domain is enhanced by the transfer function H(u, v). The resulting expression is as follows:

S(u, v) = H(u, v)I(u, v) = H(u, v)L(u, v) + H(u, v)R(u, v)    (22)

(iv) The inverse Fourier transform is used to transform the image from the frequency domain back into the spatial domain. Let s(x, y) denote the inverse Fourier transform corresponding to S(u, v); then, the inverse Fourier transform of equation (22) is

s(x, y) = F^{-1}[H(u, v)L(u, v)] + F^{-1}[H(u, v)R(u, v)] = h_L(x, y) + h_R(x, y)    (23)

Therefore, the enhanced image corresponds to the superposition of the filtered illumination component and reflection component.

(v) The inverse logarithmic transform G(x, y) = exp[s(x, y)] is applied to obtain the final corrected image. Thus, by taking the exponent of both sides of equation (23), the image after frequency-domain correction is obtained as follows:

G(x, y) = exp|h_L(x, y)| · exp|h_R(x, y)|    (24)

Therefore, the core of the HF technique is to design an appropriate filter H(u, v), based on the image properties characterized by the illumination component and the reflection component, in combination with a frequency filter and a gray transformation, to compress the dynamic range and enhance the contrast. A homomorphic filter has the following general form:

H(u, v) = (γ_H - γ_L)H_hp(u, v) + γ_L    (25)

where γ_L < 1 and γ_H > 1; the purpose of these parameters is to control the range of the filter amplitude. H_hp is usually a high-pass filter, such as a Gaussian high-pass filter, a Butterworth high-pass filter, or a Laplacian filter. If a Gaussian filter is used as H_hp, then

H_hp(u, v) = 1 - exp[-c(D^2(u, v)/D_0^2)]    (26)

where c is a constant that controls the form of the filter, D(u, v) is the distance from the origin of the centered frequency plane, and D_0 is the cutoff distance. The larger the value of c is, the steeper the transition from low frequency to high frequency, as shown in Fig. 17.

FIGURE 17. Amplitude-frequency curve of a homomorphic filter.
FIGURE 18. Flowchart of the HF process.

The specific algorithm flow of the HF method is shown in Fig. 18. In this figure, Log is the logarithmic transform, FFT is the fast Fourier transform, H(u,v) is the frequency filtering function, IFFT is the inverse FFT, and Exp is the exponential operation.

The traditional HF algorithm requires two Fourier transforms and thus is not suitable for real-time processing. To address this issue, some scholars have proposed an HF algorithm based on a spatial filter [153], [154]. The main idea is similar to that of the traditional HF algorithm: first, the original image is transformed into the logarithmic domain; then, the output of a low-pass filter is used to estimate the illumination component; finally, the reflection component is added to obtain the enhanced image. Because the traditional HF algorithm does not account for the local features of the image space, Zhang and Xie [155] proposed an HF algorithm based on the block discrete cosine transform (DCT) and removed the block effect after HF by considering the average boundaries with adjacent subimages as well as the characteristics of the DCT. Images processed with this algorithm show a good effect in terms of local contrast.
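Steps (i)-(v) above can be condensed into a short sketch. The following is a minimal single-channel homomorphic filter using the Gaussian high-emphasis form of Eqs. (25)-(26); the parameter values, the log1p/expm1 pairing, and the final rescaling to [0, 255] are our own illustrative choices, not prescriptions from the cited works.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=10.0):
    """Steps (i)-(v): log -> FFT -> H(u,v) -> IFFT -> exp, for a 2-D image."""
    z = np.log1p(img.astype(float))                     # (i) ln I = ln L + ln R
    Z = np.fft.fftshift(np.fft.fft2(z))                 # (ii) Fourier transform
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2              # D^2(u, v) from the origin
    h_hp = 1.0 - np.exp(-c * d2 / d0 ** 2)              # Eq. (26), Gaussian high-pass
    H = (gamma_h - gamma_l) * h_hp + gamma_l            # Eq. (25)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # (iii)-(iv) filter and invert
    g = np.expm1(s)                                     # (v) exponential
    g = (g - g.min()) / (g.max() - g.min() + 1e-12)     # rescale for display
    return np.round(255.0 * g).astype(np.uint8)
```

With gamma_l below 1 the slowly varying illumination is attenuated, while gamma_h above 1 boosts the reflectance detail, compressing the dynamic range and raising local contrast in one pass.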
A two-channel HF color image enhancement method based on the HSV color space was proposed in [41] by Han Lina et al. First, the input color image is transformed from RGB space into HSV space, thus obtaining separate chroma-, saturation- and brightness-channel images. Then, the saturation (S)-channel image is enhanced via Butterworth HF, and the brightness (V)-channel image is enhanced via Gaussian HF. Finally, the image is transformed back into RGB space to obtain the enhanced image. Fig. 19 shows the effects of enhancement processing with a Gaussian high-pass filter and presents the histograms corresponding to the images. The image brightness is improved after the HF process, but the image details are fuzzy.

FIGURE 19. HF effect (figure best viewed in color).

Each coin has two sides: HF has the advantage of better maintaining the original image content, but its disadvantage is that it requires two Fourier transforms as well as one exponential operation and one logarithmic operation, and therefore involves more calculation [156]-[158]. If the cutoff frequency of the high-pass filter is too high, the dynamic range will be overly compressed and details will be lost; if the cutoff frequency is too low, the dynamic range compression will be minimal, and the algorithm will lack self-adaptability. This method is also based on the premise of uniform illumination; consequently, the enhancement effect is poor for nighttime images with both bright and dark areas.

HF algorithms can remove uneven regions generated by light while maintaining the contour information of an image. However, such an algorithm requires two Fourier transformations, as well as one exponential operation and one logarithmic operation for each pixel in an image; therefore, its computational burden is large.

2) WAVELET TRANSFORM (WT)
Similar to the Fourier transform, the WT is a mathematical transform that uses a group of functions, called a wavelet basis, to represent or approximate a signal [159]. The WT can be used not only to characterize the local features of signals in the time and frequency domains but also to conduct a multiscale analysis of functions or signals through operations such as scaling and translation. Thus, great progress in image contrast enhancement has been achieved using WT methods.

In a WT-based image enhancement algorithm, the input image is first decomposed into low-frequency and high-frequency components; the components at different frequencies are then separately enhanced to highlight the details of the image. The main idea of the wavelet analysis method is to apply wavelet decomposition to the original image to obtain the wavelet coefficients for different subbands, adjust these coefficients, and then apply the inverse transformation to the new coefficients to obtain the processed image. Such an algorithm can enhance an image at multiple scales based on the WT. It is believed that low-illumination conditions have a greater influence on high-frequency image components, which are generally concentrated at the edges of an image and in contour regions [160]. Therefore, a WT-based algorithm will enhance the high-frequency components of the input image and suppress its low-frequency components. In particular, the dual-tree complex WT can usually achieve satisfactory results [161]-[165].

The basic process of WT-based image enhancement is as follows. Scaling and displacement are applied to the function ψ(t) describing the basic wavelet (mother wavelet); a wavelet sequence can then be obtained by taking the inner product between the processed ψ(t) and the signal x(t) to be analyzed at various scales a:

WT_x(a, τ) = (1/√a) ∫_{-∞}^{+∞} x(t) ψ*((t - τ)/a) dt,  a > 0    (27)

The equivalent expression in the frequency domain is

WT_x(a, τ) = (√a/2π) ∫_{-∞}^{+∞} X(ω) Ψ*(aω) e^{jωτ} dω    (28)

where X(ω) and Ψ(ω) represent the Fourier transforms of x(t) and ψ(t), respectively.

In standard WT-based image enhancement, the input image is usually first decomposed into one low-pass subimage and three directional high-pass subimages, namely, a horizontal detail image, a vertical detail image, and a diagonal detail image. The low-pass subimage represents the low-frequency information in the image, which corresponds to smooth regions. The high-pass subimages represent the high-frequency information in the image, which corresponds to detailed image information. Based on the characteristics of these subimages, the most effective method is selected to enhance the coefficients of the different frequency components. Finally, the enhanced image in the spatial domain is obtained through the inverse transformation.

The steps of WT-based image enhancement are as follows [163], [164].
(i) The original image is input.
(ii) The low-frequency and high-frequency components of the original image are obtained via wavelet decomposition.
(iii) The wavelet coefficients are nonlinearly enhanced with a functional relationship that satisfies

W_o = W_i + G(T - 1)W_i,  W_i > T
W_o = G·W_i,              |W_i| ≤ T
W_o = W_i - G(T - 1)W_i,  W_i < -T    (29)

where G is the gain for the wavelet coefficients, T is the threshold, W_i is a wavelet coefficient after image decomposition, and W_o is the corresponding wavelet coefficient after enhancement.
(iv) The enhanced wavelet coefficients are inversely transformed to obtain the reconstructed enhanced image.

The basic flow of the WT-based image enhancement process is illustrated in Fig. 20.

FIGURE 20. Flowchart of WT-based image enhancement.
FIGURE 21. WT-based low-light image enhancement (figure best viewed in color).

The results of WT-based image enhancement are shown in Fig. 21, where n is the wavelet scale.

In low-light images, it is difficult to distinguish image noise from image details. A high-frequency analysis is conducted on the WT-decomposed image, and various thresholds and enhancement factors are then applied to the decomposed wavelet coefficients to effectively remove noise while enhancing the detail components. Generally, the enhancement effect is better than that of traditional image enhancement algorithms [166]. Zong et al. proposed a contrast enhancement method based on multiscale wavelet analysis [167]; in this method, a multiscale nonlinear high-pass function is used to process the wavelet coefficients, thus enabling the enhancement of ultrasonic images. Loza et al. proposed an adaptive contrast enhancement method based on the statistics of local wavelet coefficients [168]; a model for the local wavelet coefficients was established on the basis of the bivariate Cauchy distribution, yielding a nonlinear enhancement function for wavelet coefficient compression. A WT-based image enhancement algorithm based on a knee function and gamma correction (KGWT) has also been proposed, in which an improved knee function and a gamma transform function are used to enhance the low-frequency coefficients [169]. After enhancement, the low-frequency coefficients are combined with the high-frequency coefficients, and finally, the inverse WT is applied to obtain the enhanced image. The KGWT algorithm improves the overall brightness and contrast of images. A WT-based image enhancement algorithm based on contrast entropy was proposed in [170]: after wavelet decomposition, the low-frequency components of the image are enhanced via HE, and the high-frequency components are enhanced by maximizing the contrast entropy. Likewise, in [171], the singular value matrices of low-frequency images were obtained with an enhanced wavelet decomposition approach, which also achieved an improved image enhancement effect. A fast and adaptive enhancement algorithm for low-light images based on the WT was proposed in [172]. In this algorithm, the RGB input image is transformed into HSV space, and the discrete wavelet transform (DWT) is applied to the brightness (V) image to separate the high-frequency and low-frequency subbands. The illumination components in the low-frequency subbands of the image are rapidly estimated and removed using bilateral filtering, while a fuzzy transformation is used to realize the enhancement and denoising of edge and texture information.

WT-based image enhancement theory is often combined with other approaches, such as fuzzy theory, image fusion, and HE. As discussed in [173], [174], after wavelet decomposition is performed on the original image, HE can be performed on each subband image individually; finally, the inverse WT can be used to reconstruct the enhanced, noise-reduced image [175]. The WT approach has also been combined with Retinex theory to enhance low-light images, thus achieving a better enhancement effect [176], [177]. Russo [178] proposed a method of improving image quality by means of multiscale equalization in the wavelet domain. Chen [179] proposed an image enhancement method that combines wavelet and fractional differential models. The WT can reflect both the time-domain and frequency-domain features of an image; specifically, this model not only extracts edge information from an image but also extracts its overall structure, which is consistent with the needs of low-light image enhancement. However, because the wavelet basis needs to be defined in advance, the application of this algorithm is limited.

The curvelet transform is a multiscale analysis method developed on the basis of the WT that can overcome the limitations of the WT by better representing the curved edges in an image [180]. Starck et al. [181] proposed a multiscale analysis method based on the curvelet transform and compared it with the WT algorithm to demonstrate its superiority for color image enhancement. The curvelet-transform-based enhancement algorithm achieves a better effect for noisy images; however, it is not as effective as the WT method for noiseless or nearly noiseless images. In [182], the WT was combined with the curvelet transform to achieve image enhancement with edge preservation.
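The decompose-adjust-reconstruct loop of steps (i)-(iv) can be illustrated with a single-level Haar transform. This is only a sketch: the Haar basis, the gain G, the threshold T, and the simplified piecewise rule standing in for Eq. (29) are all our own illustrative choices, not those of any cited algorithm.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = img.astype(float)
    L = (a[:, ::2] + a[:, 1::2]) / 2.0   # column-wise averages
    H = (a[:, ::2] - a[:, 1::2]) / 2.0   # column-wise differences
    LL = (L[::2] + L[1::2]) / 2.0
    LH = (L[::2] - L[1::2]) / 2.0
    HL = (H[::2] + H[1::2]) / 2.0
    HH = (H[::2] - H[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    rows, cols = LL.shape
    L = np.empty((rows * 2, cols))
    H = np.empty((rows * 2, cols))
    L[::2], L[1::2] = LL + LH, LL - LH
    H[::2], H[1::2] = HL + HH, HL - HH
    out = np.empty((rows * 2, cols * 2))
    out[:, ::2], out[:, 1::2] = L + H, L - H
    return out

def wt_enhance(img, gain=2.0, thresh=2.0):
    """Boost detail coefficients above `thresh`, shrink those below it
    (a simplified stand-in for the piecewise rule of Eq. (29))."""
    LL, LH, HL, HH = haar2d(img)
    def f(w):
        return np.where(np.abs(w) > thresh, gain * w, w / gain)
    out = ihaar2d(LL, f(LH), f(HL), f(HH))
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because only the three detail subbands pass through f, smooth regions (and small, noise-like coefficients) are left subdued while strong edges are amplified, which is exactly the behavior the section describes.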
First, in which an improved knee function and a gamma trans- the curvelet transform is applied to remove noise without the form function are used to enhance the low-frequency coef- loss of edge details; then, the image is enhanced using the WT. cients [169]. Then, after enhancement, the low-frequency In [183], an improved enhancement algorithm for noisy low- coefcients are combined with the high-frequency coef- light color images based on the second-generation curvelet cients, and nally, the inverse WT is applied to obtain transform was proposed. A compromise factor for the YUV 87896 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods (luminance and chromaticity) color space and an improved infrared and visible images [192]; this algorithm enhances gain function were used to suppress, elevate or maintain the the clarity of the image details while retaining the unique curvelet coefcients. This approach effectively suppresses information captured by various sensors. Additionally, this noise while optimally recovering both the edges and smooth algorithm involves only addition and subtraction operations parts in an image acquired under low illumination. Thus, and thus can be implemented using simple hardware in real the image enhancement quality is effectively improved. The time [193]. Zhu et al. proposed a fusion framework for night main advantage of image enhancement methods based on vision applications called night vision context enhancement the WT is that they allow the time- and frequency-domain (FNCE) [194], in which the fused result is obtained by com- features of images to be analyzed on multiple scales [184]. bining decomposed images using three different rules. 
Another advantage of wavelet analysis lies in the rened Furthermore, many scholars have studied the use of night local analysis capabilities, as such methods have better local vision technology for single- and double-channel low-light characteristics in both the spatial and frequency domains and color fusion based on bispectral and trispectral features. thus are benecial for analyzing and highlighting the details Vision technology has been developed based on low-light of an image. Wavelet analysis is mainly used for infrared and infrared thermal image fusion, low-light and longwave images [185], [186] and medical image enhancement [187]. infrared image fusion, ultraviolet and low-light image fusion, The disadvantage is that overbright illumination cannot be and even trispectral color fusion based on low-light, medium- avoided. wave and longwave infrared images [195][197]. However, In summary, frequency-domain-based algorithms can the visible and infrared images need to be acquired simul- effectively highlight the details of an image through enhance- taneously, which constrains such algorithms in terms of the ment of the wavelet coefcients, but they can also easily hardware conditions necessary to support them. Moreover, magnify the noise in the image. Like other frequency-domain the intelligence and adaptability of these algorithms are poor, transformation methods, these image enhancement meth- and their parameters need to be articially set. Therefore, ods require large amounts of calculation, and the selec- these algorithms have still not been widely adopted. tion of the transformation parameters often requires manual 2) IMAGE FUSION BASED ON BACKGROUND intervention. HIGHLIGHTING E. 
METHODS BASED ON IMAGE FUSION Generally, image fusion methods based on background high- Another direction of research on low-light image enhance- lighting rely on the integration of low-light images with ment involves methods based on image fusion tech- daytime images to enhance the image details, thus improving niques [188]. In these methods, many images of the same the visual effect of the low-light images [198]. The general scene are obtained with different sensors, or additional process is described as follows. First, an image is obtained images are obtained with the same sensor using various imag- in the daytime under reasonably sufcient lighting condi- ing methods or at different times. Finally, as much useful tions for use as the source of the background for the fused information as possible is extracted from every image to image. Then, another image is obtained in the same position synthesize a high-quality image, thus improving the utiliza- under low illumination, and the background of this image tion rate of the image information. The synthesized image is removed. The remainder of the latter image is taken as can reect multilevel information from the original images the foreground of the fused image. Finally, the background to comprehensively describe the scene, thus allowing the and foreground are integrated into a single image using a available image information to better meet the requirements suitable algorithm. For example, Raskar et al. estimated the of both human observers and computer vision systems. intensity of the mixed gradient eld of multiple low-light images and daytime images of the same scene, thus improv- 1) MULTISPECTRAL IMAGE FUSION ing the visual effect of the low-light images [199]. Rao et al. Multispectral image fusion is an improved method of obtain- proposed a low-light enhancement method based on video ing the details of a low-light imaged scene by fusing a frame fusion [200]. 
The foreground area of each low-light visible image with an infrared image. Near-infrared (NIR) video frame was fused with the background area from a light has a longer wavelength and stronger penetration abil- daytime video frame of the same scene to improve the bright- ity than does visible light, allowing redundant information ness of the low-light video and compensate for detail loss. to be removed from a ltered infrared image. Addition- In [201], daytime images from the same site at various times ally, a low-light visible image can provide rich background were fused, and the nal fused image was obtained using a information; consequently, better images can be obtained moving object extraction technique and weighting processing through image fusion. For example, in a method developed by based on brightness estimation theory. This process is shown the US Naval Research Laboratory (NRL), images obtained in Fig. 22. with an infrared thermal imager are integrated with the R-, Multi-image fusion methods such as those presented in G- and B-channel images obtained with a low-light night [202], [203] achieve a better enhancement effect but require vision device to obtain night vision color images [189][191]. high-quality daytime video information from the same scene. Toet et al. proposed a pseudocolor fusion algorithm for For example, such methods are not suitable for use in VOLUME 8, 2020 87897 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods The fusion of multiple images acquired from the same scene can be applied to effectively enhance low-light images. Because good-quality image information from the same scene is needed, methods of this kind have stringent require- ments in terms of image acquisition; in particular, the camera equipment needs to be stable. Since a long shooting time is required, this method cannot be applied for real-time imaging or video enhancement. Moreover, the enhancement effect for FIGURE 22. 
Fusion based on background highlighting. images of globally low brightness is poor. underground mines, because no high-quality daytime video information is available for such areas. Therefore, the appli- cations of these algorithms are limited. Moreover, the large number of iterations required complicates the calculation. 3) FUSION BASED ON MULTIPLE EXPOSURES Image fusion is the process of combining multiple images FIGURE 23. Fusion based on a single image. of the same scene into a single high-quality image that contains more information than any single input image. 4) FUSION BASED ON A SINGLE IMAGE Petschnigg et al. proposed a method of obtaining vari- ous images with both ash and nonash technologies and Many scholars have studied the synthesis of the entire then realizing low-light image enhancement through image dynamic range of a scene [213], [214], including details fusion [204]. In this method, a ash image is captured to extracted in a variety of ways from a single image, to break record detailed information of the scene, and a nonash the dependence on image sequences, as shown in Fig. 23. image is captured to record the brightness information of the Le and Li [215] improved the contrast of an image by fusing background. Then, the image detail information is integrated the original image with the image obtained after a logarithmic with the background brightness information. The resulting transformation. Yamakawa and Sugita presented an image image contains not only the details from the ash image fusion technique that used a source image and the Retinex- but also the brightness information from the nonash image. processed image to achieve high visibility in both bright and Similarly, high-dynamic-range (HDR) [205][207] imaging dark areas [216]. In Ref. [217], Wang et al. adaptively gen- using multiexposure fusion (MEF) techniques has become erated two new images based on nonlinear functional trans- very popular in recent years. 
MEF methods use multiple formations in accordance with the illumination-reection images of the same scene with different exposures. The nal model and multiscale theory and used a principal component HDR image is obtained by synthesizing the best details from analysis (PCA)-based fusion method to enhance a low-light the images corresponding to each exposure time. A gradient- image. In [218], an adaptive histogram separation method domain HDR compression algorithm was proposed in [208]. was used to construct underexposed and overexposed images In this algorithm, different gradients are proportionally com- from an original image sequence; these images were then sep- pressed in the gradient domain of the images, and Pois- arately processed, and nally, HDR images were generated son's equation is solved in the modied gradient domain via multiexposure image fusion. In addition, Fu et al. [219] to obtain output images with a low dynamic range. This proposed an image enhancement algorithm based on the algorithm can also reveal detailed information in areas of fusion of the results of multiple enhancement techniques. various brightness in HDR night images. Li et al. proposed This algorithm integrates multiple image enhancement tech- an image enhancement algorithm based on multiple image niques by means of a linear weighted fusion strategy to fusion [209]. In this algorithm, multiple images of the same improve the enhancement effect. However, this strategy is scene are rst acquired with different exposure times, and too complex to satisfy real-time requirements. The algorithm then, various details are extracted from each image. Finally, proposed in [220] integrates the color contrast, saturation and these details are integrated to generate an enhanced HDR exposure brightness of an original or preprocessed image image [210]. 
Merianos and Mitianoudis combined two image by incorporating MSRCR into a pyramid algorithm using fusion methods, one for the fusion of the luminance channel the gold tower technique and specifying different weight and one for the fusion of the color channels. The fusion out- parameters depending on the image information to achieve put thus derived outperforms both individual methods [211]. the effective color enhancement of a traditional low-light In Ref. [212], the author proposed a new fusion approach image. A camera response model (CRM) is often adopted in the spatial domain using a propagated image lter. In the for generating multiple images [221]. In [222], the authors proposed approach, a weight map is calculated for every input proposed a single-image-based method of generating HDR image using the propagated image lter and gradient-domain images based on camera response function (CRF) recon- postprocessing. struction. Ying et al. [223] proposed a novel bioinspired 87898 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods enhancement model in which the source image is generated on the basis of a simulated CRM, and the weight matrix for image fusion is designed using illumination estimation techniques [224], [225]. Unlike the model presented in [223], the model presented in the later cited papers avoids any heuristic judgment of whether an image pixel is underexposed and thus is more exible in generating more intermediate enhancement results. In Ref. [226], a framework based on a CRM and a weighted least squares strategy was proposed in which every pixel is adjusted in accordance with a calculated exposure map and Retinex theory; this framework can pre- serve details while improving contrast, color correction, and noise suppression. In addition, Zhou et al. [227] generated FIGURE 24. Comparison of histograms of foggy, low-light and inverted multiple enhanced images based on a lightness-aware CRM images. 
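The weight-map idea underlying these multiexposure fusion methods can be sketched in a few lines. The following toy NumPy illustration fuses two exposures of a one-dimensional "scene" using a Gaussian well-exposedness weight; the weight function and its parameter sigma are our own illustrative choices, and published MEF methods such as [209] additionally use contrast and saturation cues and pyramid blending rather than this direct weighted average.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Fuse differently exposed images with values in [0, 1].

    Each pixel of each exposure is weighted by its 'well-exposedness'
    (a Gaussian centered at mid-gray 0.5), the weights are normalized
    across exposures, and the weighted images are summed.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    # Well-exposedness weight: favors pixels near mid-gray
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-12  # normalize across exposures
    return (weights * stack).sum(axis=0)

# Toy example: an underexposed and an overexposed rendering of a ramp
scene = np.linspace(0.0, 1.0, 5)
under = np.clip(scene * 0.4, 0.0, 1.0)  # dark exposure keeps highlights
over = np.clip(scene * 1.6, 0.0, 1.0)   # bright exposure keeps shadows
fused = exposure_fusion([under, over])
```

Because the normalized weights form a convex combination at every pixel, the fused value always lies between the darkest and brightest input values at that pixel, which is why such schemes do not introduce out-of-range artifacts.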
The multiple enhanced images generated in [227] were then fused at the mid-level based on a patch-based image decomposition model. This model, however, has a limited ability to improve images in which one area is already overenhanced. In this case, the overenhanced area is even more strongly enhanced, resulting in the loss of important details.

In short, the main idea of methods based on image fusion is that useful information on the same target collected from multiple sources can be further utilized, without requiring a physical model, to obtain a final high-quality image through image processing and computer technology. These fusion-based methods are simple and can achieve good results. However, they require two or more different images of the same scene; therefore, it is difficult to realize image enhancement within a short time, as is needed for real-time monitoring situations, and these methods are difficult to apply and popularize in practice.

F. METHODS BASED ON DEFOGGING MODELS
As one branch of the field of image enhancement, image defogging techniques have seen great progress and produced good results in recent years. In 2009, He Kaiming proposed the dark channel prior theory for images, which has been widely applied [228]. In 2011, a low-light enhancement algorithm [229], also called a bright channel prior method [230], [231], was proposed by Dong et al. based on defogging theory; this method relies on the statistical analysis of a dark primary color version of a low-light inverted image and a dark primary color version of a foggy image. The main idea of the algorithm is that when an RGB image captured in a dark environment is inverted, the visual effect is similar to that of a daytime image acquired in a foggy environment (as shown in Fig. 24). Hence, a defogging algorithm based on a dark channel prior can be used to process the inverted low-light image; then, the image can be inverted again to obtain an enhanced low-light image.

This enhancement method greatly improves the image brightness and enhances the visual details of the image by analyzing the features of the low-light image and modeling them with a foggy image degradation model. The basic process is shown in Fig. 25.

FIGURE 25. Framework of low-light image enhancement based on a dark channel prior.

Low-light image enhancement methods based on dark primary color defogging techniques have certain issues. For example, low-light images inevitably contain noise; however, the foggy image degradation model used in such an algorithm does not consider the effect of noise. Therefore, the image noise will typically be amplified, which will visually impact the results of image enhancement [232]-[234]. Considering the need for noise processing, Liu Yang et al. optimized the processing speed of such an algorithm without accurately extracting the transmittance; therefore, the final enhanced images exhibited a blocky effect. Zhang et al. also proposed an optimized algorithm, in which the parameters for transmittance estimation are selected directly based on experience; consequently, the robustness of this algorithm is poor [235]. Jiang et al. used filters to remove details and introduced a pyramid operation to calculate a smooth transmission coefficient, which not only improved the processing speed but also yielded better naturalness [236]. Simultaneously, the noise was suppressed. Later, Song et al. [237] improved upon this model to overcome an issue related to block artifacts. Then, Pang introduced a gamma transformation to improve the image contrast [238]. By combining the defogging approach with bilateral filtering, Zhang et al. proposed a low-light image enhancement method that can operate in real time. After the parameters are initially estimated using a dark channel prior, they are optimized using a bilateral filter; thus, the effect of noise is reduced [239]. Tao et al. combined a bright channel prior with a convolutional neural network (CNN) [240], and Park et al. combined a bright channel prior [241] with a Retinex enhancement algorithm. Both achieved improved results [242]. A fast enhancement algorithm for low-light video has been proposed by combining Retinex theory with dark channel prior theory [243], and this algorithm can be further combined with scene detection, edge compensation and interframe compensation techniques for video enhancement. In [244], a method was proposed to solve the transmittance problem based on a foggy degradation model and a CNN, in which the transmission map and atmospheric light map are amended by means of guided filtering to obtain an enhanced low-light image. Recently, an enhancement method with strong illumination mitigation and bright halo suppression has been presented, which combines a dehazing algorithm with a dark channel prior and a denoising method to achieve a better visual effect [245].

Algorithms based on defogging models offer good performance with low computational complexity. However, their physical interpretation is somewhat lacking, and they are still susceptible to overenhancement in some detailed areas. Inverted low-light images have their own unique characteristics, and the direct application of defogging algorithms to such images is still not an ideal approach for image enhancement.

G. METHODS BASED ON MACHINE LEARNING
Most existing low-light image enhancement techniques are model-based techniques rather than data-driven techniques. Only in recent years have methods based on machine learning for image enhancement begun to emerge in significant numbers [246]-[248]. For example, in [249], the reflection component of an object was represented using a sparse representation method, and the image details contained in the reflection component were learned using a dictionary learning method, thus achieving an improved enhancement effect. However, noise can easily be introduced during the machine learning process. An image enhancement method based on a color estimation model (CEM) was proposed by Fu et al. [250], in which the dynamic range of color images in the RGB color space was controlled by adjusting the CEM parameters to effectively inhibit oversaturation of the enhanced images. Fotiadou et al. proposed a low-light image enhancement algorithm based on a sparse image representation [251] in which both a low-light condition dictionary and a daylight condition dictionary were established. The sparse constraint was used as prior knowledge to update the dictionaries, and low-light image blocks were used to approximately estimate the corresponding daylight images. An image enhancement algorithm based on fuzzy rule reasoning [252] was proposed in which three traditional enhancement methods were combined by applying fuzzy theory and machine learning to establish a set of fuzzy rules, and the best enhancement algorithm was adaptively selected for different images to achieve image enhancement. This method can also be used to objectively and accurately evaluate the image enhancement effect.

Since 2016, several deep-learning-based methods for image enhancement have also emerged. For example, Yan et al. proposed the first deep-learning-based method for photo adjustment [253]. Lore et al. adopted a stacked sparse denoising autoencoder in a framework for training an LLNet for low-light image enhancement [254]. In this framework, a sparsity regularized reconstruction loss was taken as the loss function, and deep learning based on the self-encoder approach was used to learn the features of image signals acquired under various low-illumination conditions to realize adaptive brightness adjustment and denoising. Park et al. proposed a dual autoencoder network model based on Retinex theory [255]; in this model, a stacked autoencoder was combined with a convolutional autoencoder to realize low-light enhancement and noise reduction. The stacked autoencoder, with a small number of hidden units, was used to estimate the smooth illumination component in the space, and the convolutional autoencoder was used to process two-dimensional image information to reduce the amplification of noise during the process of brightness enhancement.

CNNs have been used as the basis of deep learning frameworks in many research works [256]-[259]. Tao et al. proposed a low-light CNN (LLCNN) in which a multistage characteristic map was used to generate an enhanced image by learning from low-light images with different kernels [260]. In [261], a global illumination-aware and detail-preserving network (GLADNet) was designed. In this network, the input image is first scaled to a certain size and then passed to encoder and decoder networks to generate global prior knowledge of the illumination. Based on this prior information and the original images, a convolutional network is then used to reconstruct the image details. Ignatov et al. took a different approach of learning a mapping between images acquired by a mobile phone camera and a digital single-lens reflex (DSLR) camera. They built a dataset consisting of images of the same scene taken by the different cameras [262] and presented an end-to-end deep learning approach for translating ordinary photos into DSLR-quality images. Lv et al. proposed a new network (MBLLEN) consisting of a feature extraction module (FEM), an enhancement module (EM) and a fusion module (FM) [263], which produces output images via feature fusion. Gabriel et al. designed a deep convolutional neural network (DCNN) [264] based on a large dataset of HDR images, and Liu et al. trained the DCNN using only synthetic data to recover the details lost due to quantization [265].
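The invert-dehaze-invert pipeline of the defogging-based methods in Section F can be sketched in NumPy as follows. This is a bare-bones illustration of the generic idea, not the algorithm of [229]: the patch size, the haze weight omega, the transmission floor t0, and the simplified atmospheric-light estimate (the pixel with the largest dark-channel value) are all illustrative assumptions, and the guided-filter refinement and denoising steps discussed above are omitted.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB followed by a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def enhance_low_light(img, omega=0.95, t0=0.1, patch=15):
    """Invert -> dehaze with a dark channel prior -> invert back.

    img: float RGB array with values in [0, 1].
    """
    inv = 1.0 - img                      # the inverted image looks 'hazy'
    dc = dark_channel(inv, patch)
    # Simplified atmospheric light: the pixel with the largest dark channel
    a = inv.reshape(-1, 3)[dc.argmax()]
    # Transmission estimate from the dark channel of the normalized image
    t = 1.0 - omega * dark_channel(inv / a, patch)
    t = np.clip(t, t0, 1.0)[..., None]   # floor avoids division blow-up
    dehazed = (inv - a) / t + a          # recover the 'haze-free' radiance
    return np.clip(1.0 - dehazed, 0.0, 1.0)  # invert back to low-light domain

# Synthetic underexposed image
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.2, size=(32, 32, 3))
out = enhance_low_light(dark)
```

Note how the transmission floor t0 plays the same role as in standard dehazing: without it, nearly uniform dark regions (whose inverted dark channel is close to 1) would be divided by a near-zero transmission and the noise amplification criticized above would be even more severe.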
Gharbi et al. constructed a learning framework based on a deep bilateral network, thus achieving real-time processing for image enhancement [266]. Later, Chen et al. [267] introduced a dataset of raw short-exposure low-light images (the See-in-the-Dark (SID) database) and developed a pipeline for processing these images based on a fully convolutional network (FCN). Through end-to-end training, a good improvement over the traditional method of low-light image processing was achieved.

Based on Retinex theory, Shen et al. [268] analyzed the performance of the MSR algorithm from the perspective of a CNN framework and proposed a method of enhancing low-light images by using an MSR network (MSR-net) based on a CNN architecture, while Guo et al. [269] proposed a pipeline neural network consisting of a denoising net and a low-light image enhancement net (LLIE-net). Wei et al. assumed that observed images could be decomposed into their reflectance and illumination components, collected a LOw-Light (LOL) dataset containing low-/normal-light image pairs, and proposed a deep network called Retinex-Net [270]. Li et al. designed a network called LightenNet based on a CNN architecture [271]; this network takes a weakly illuminated image as input and outputs a corresponding illumination map, which is subsequently used to generate an enhanced image based on the Retinex model. Zhang et al. built a simple yet effective network called the Kindling the Darkness (KinD) network [272], which is composed of a layer decomposition net, a reflectance restoration net, and an illumination adjustment net, and trained it on pairs of images captured under different exposure conditions.

Inspired by the multiple image fusion method, Cai et al. [273] proposed a framework based on a CNN trained to enhance single images. In this work, thirteen different MEF and HDR compression methods were used to generate an enhanced image for each series of images from a large-scale multiexposure image dataset. Finally, low-light images were enhanced by the CNN after end-to-end training on the low-contrast and high-contrast image dataset. Yang et al. used two CNNs to build a tool for RGB image enhancement [274] in which intermediate HDR images are first generated from the input RGB images to ultimately produce high-quality LDR images. Nevertheless, generating HDR images from single images is a challenging problem. To handle both local and global features, Kinoshita and Kiya proposed an architecture consisting of a local encoder, a global encoder, and a decoder trained on tone-mapped images obtained from existing HDR images [275]. Experimental results showed its excellent performance compared with conventional image enhancement methods, including CNN-based methods.

In contrast to supervised learning methods, generative adversarial network (GAN)-based methods can be used for image enhancement without training on pairs of images [276], [277]. For example, Meng et al. proposed a GAN-based framework for nighttime image enhancement, which takes advantage of GANs' powerful ability to generate images from real data distributions, and the results demonstrate its effectiveness. To our knowledge, this was the first time that GANs were applied for the purpose of nighttime image enhancement [278]. In Ref. [279], the authors fully utilized the advantages of both GANs and CNNs, using a transitive CNN to map the enhanced images back to the space of the source images to relax the need for paired ground-truth photos. Kim et al. [280] applied local illumination to make the training images and used an advanced generative adversarial network to build Low-Lightgan. The key advantages of such networks are that they can be trained easily and can achieve better experimental results than traditional enhancers [279].

Undoubtedly, deep-learning-based methods can achieve excellent performance in low-light image enhancement, and they also represent a major trend of current development in image processing research. However, such methods must be supported by large datasets, and an increase in the complexity of a model will lead to a sharp increase in the time complexity of the corresponding algorithm. With the steady growth of research on low-light image enhancement, not only are some low-light data available from widely used public benchmark datasets such as PASCAL VOC [281], ImageNet [282], and Microsoft COCO [283], but researchers are also building public datasets specifically designed for low-light image processing, such as SID [267] and EDD (Exclusively Dark Dataset) [284].

III. EVALUATION METHODS
Image quality assessment (IQA) focuses mainly on two aspects, namely, the fidelity of the image and the readability of the image, which can be regarded as subjective and objective evaluation standards, respectively. A subjective evaluation method measures image quality on the basis of the subjective perception of the human visual system, i.e., whether the image conveys a certain experience. However, it is still difficult to accurately simulate the human visual system. Therefore, current subjective evaluation systems based on the human visual system can evaluate image quality only qualitatively rather than quantitatively [285].

A. SUBJECTIVE EVALUATION
In a subjective evaluation method, human observers are asked to evaluate the quality of processed images in accordance with their visual effects based on a predetermined evaluation scale. Such an evaluation depends on subjective assessment of the image processing results to determine the advantages and disadvantages of a particular algorithm. The score is typically divided into 5 grades (1-5 points), and the number of raters should typically be no fewer than 20 [286]. Some of the raters should have experience in image processing, while some should not. The raters will evaluate the visual effects of the images in accordance with their personal experience or agreed-upon evaluation criteria. To ensure fairness and equity, the final scores will be weighted to obtain the final subjective quality evaluation result for each image. The typical evaluation standards are summarized in Table 1.

TABLE 1. Criteria for subjective assessment.

This method is simple and can reflect the visual quality of images. Such a subjective evaluation can accurately represent the visual perception of the majority of observers. However, such an evaluation lacks stability and can be easily affected by the experimental conditions as well as the knowledge background, emotional state, motivation and degree of fatigue of the observer. In studies related to image enhancement, it is necessary to provide key details of different magnified parts of images for comparison to assess, e.g., lack of uniformity.

TABLE 2. NIQA metrics and related references.
However, this process is time consuming and arduous in practice and thus often cannot be applied in engineering applications. B. OBJECTIVE EVALUATION An objective evaluation is an evaluation using specic data and based on certain objective criteria. To the best of our knowledge, there are no IQA methods that have been speci- cally designed for the evaluation of low-light image enhance- (i) Mean value (MV). The MV mainly refers to the mean ment methods. Hence, different researchers utilize different of the gray values of an image, and it mainly reects the strategies to evaluate their results. At present, the objec- color or degree of brightness of the image. The smaller the tive evaluation methods for image enhancement can be image mean is, the darker the image. Conversely, the larger divided into full-reference methods and no-reference meth- the mean is, the brighter the image, and the lighter the colors. ods depending on whether they require reference images The formula is as follows: (ground-truth images or synthetic images). Objective evalua- M N XX tion methods have various advantages, such as simple calcu- D f (i; j) (30) lations, fast execution, ease of quantitative calculation based M N iD1 jD1 on a constructed model, and high stability; therefore, data from objective evaluations are generally adopted as image where M and N are the width and height, respectively, of the quality scores [287]. image and f (i; j) is the gray value at pixel point (i; j). (ii) Standard difference (STD). The variance of the gray 1) NO-REFERENCE IQA (NIQA) METRICS values reects the degree of dispersion of the image relative to the mean and thus is a measure of the contrast within a Since no objective reference image is available in the case certain range. The larger the variance is, the more information of a low-light input image, most methods that are suitable for is contained in the image, and the better the visual effect. 
low-light image enhancement assessment are based on NIQA When the variance is smaller, the information contained in metrics. The most common NIQA metrics include the mean the image is less, and the image is more monochromatic and value (MV), standard difference (STD), average gradient uniform. The formula is (AG), and information entropy (IE). In addition, there are sev- eral general methods available for image quality evaluation, M N P P including the Blind/Referenceless Image Spatial QUality u 2 f (i; j)(f (i; j) ) Evaluator (BRISQUE) [288], the Naturalness Image Quality t iD1 jD1 STD D (31) Evaluator (NIQE) [289], the BLind Image Integrity Notator M N using DCT Statistics (BLIINDS-II) [290], the blind tone- mapped image quality index (BTMQI) [291], gradient ratio- where M and N are the image width and height, respectively; ing at visible edges (GRVE) [292], the autoregressive-based f (i; j) is the gray value at pixel point (i; j); and is the MV image sharpness metric (ARISM) [293], the no-reference of the image image quality metric for contrast distortion (NIQMC) [294], (iii) Average gradient (AG). The AG represents the clarity the Global Contrast Factor (GCF) [295], the average informa- of an image, reecting the image's ability to express con- tion content (AIC) [296], the effective measure of enhance- trasting details. This metric measures the rate of change in ment (EME) [297], PixDist [298], and the no-reference the image values based on changes in the contrast of minute free-energy-based robust metric (NFERM) [299]. The com- details or the relative clarity of the image. In an image, faster monly used NIQA metrics and related references are shown gray changes in a certain direction result in larger image in Table 2. Descriptions of several of these metrics follow. gradients; therefore, this metric can be used to determine 87902 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods TABLE 3. 
FIQA metrics and related references. whether or not an image is clear. The AG can be expressed as M N XX 2 2 1 (@f =@x) C (@f =@y) AG D (32) M N 2 iD1 jD1 where M and N are the image width and height, respectively, and @f =@x and @f =@x are the horizontal and vertical gradi- ents, respectively. (iv) Information entropy (IE). Entropy can be used as a measure of an amount of information and is widely used to evaluate image quality [300,48]. A static image is regarded as an information source with random output; the set A of source symbols is dened as the set of all possible sym- bols {a }, and the probability of source symbol a is P(a ). i i i Thus, the average information quantity of an image can be expressed as H D P(a ) log P(a ) (33) i i iD1 According to entropy theory, the larger the IE value is, the larger the amount of information contained in the image, and the richer the image detail. between the images before and after processing. An exces- sively high PSNR indicates that the effect of the denoising 2) FULL-REFERENCE IQA (FIQA) METRICS algorithm is not obvious. A smaller PSNR indicates a greater The most common FIQA metrics include the mean square difference between the images before and after processing. error (MSE), the peak signal-to-noise ratio (PSNR), the struc- An excessively low PSNR may suggest that the image is tural similarity index metric (SSIM) [301], and the light- distorted. The specic expression is as follows: ness order error (LOE) [136]. Other available FIQA max metrics include the patch-based contrast quality index PSNR D 10 lg (35) MSE (PCQI) [302], the colorfulness-based PCQI (CPCQI) [303], where f is the maximum gray value, f D 255. the Gradient Magnitude Similarity Deviation (GMSD) [304], max max (iii) Structural similarity index metric (SSIM). 
TABLE 3. FIQA metrics and related references.

2) FULL-REFERENCE IQA (FIQA) METRICS
The most common FIQA metrics include the mean square error (MSE), the peak signal-to-noise ratio (PSNR), the structural similarity index metric (SSIM) [301], and the lightness order error (LOE) [136]. Other available FIQA metrics include the patch-based contrast quality index (PCQI) [302], the colorfulness-based PCQI (CPCQI) [303], the Gradient Magnitude Similarity Deviation (GMSD) [304], the visual information fidelity (VIF) [305], the visual saliency index (VSI) [306], the tone-mapped image quality index (TMQI) [307], the Statistical Naturalness Measure (SNM) [307], and the Feature SIMilarity Index (FSIM) [308]. The commonly used FIQA metrics and related references are shown in Table 3. Descriptions of several of these metrics follow.

(i) Mean square error (MSE). This metric represents the direct deviation between the enhanced image and the original image; it has the same meaning as the absolute mean brightness error (AMBE) [75].

$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[f(i,j)-f_{e}(i,j)\bigr]^{2} \quad (34)$$

where M and N are the width and height, respectively, of the image; f(i,j) represents the input image; and f_e(i,j) represents the enhanced image. In an image quality evaluation, a smaller MSE value indicates higher similarity between the enhanced and original images.

(ii) Peak signal-to-noise ratio (PSNR). The PSNR of an image is the most extensively and commonly used objective evaluation method for measuring the image denoising effect. The larger the PSNR value is, the smaller the difference between the images before and after processing; an excessively high PSNR indicates that the effect of the denoising algorithm is not obvious. A smaller PSNR indicates a greater difference between the images before and after processing; an excessively low PSNR may suggest that the image is distorted. The specific expression is as follows:

$$\mathrm{PSNR}=10\lg\frac{f_{\max}^{2}}{\mathrm{MSE}} \quad (35)$$

where f_max is the maximum gray value, f_max = 255.

(iii) Structural similarity index metric (SSIM). The above methods do not consider the characteristics of the human visual system when assessing image quality; they compute only a simple random error between the input image and the processed image and analyze the difference between the input and output images from a mathematical perspective. Therefore, the above metrics cannot fully and accurately reflect the image quality. Researchers have found that natural images exhibit certain special structural features, such as strong correlations between pixels, and these correlations capture a large amount of important structural information for an image. Therefore, Wang et al. proposed a method based on structural similarity for evaluating image quality [301]. The SSIM evaluates the quality of a processed image relative to the reference image based on comparisons of luminance (l(f, f_e)), contrast (c(f, f_e)) and structure (s(f, f_e)) between the two images. These three values are combined to obtain the overall similarity measure. The formula is as follows:

$$\mathrm{SSIM}=F\bigl[l(f,f_{e}),c(f,f_{e}),s(f,f_{e})\bigr] \quad (36)$$

The flow of the SSIM algorithm is shown in Fig. 26.

FIGURE 26. Flowchart of the SSIM algorithm.

The degree of similarity between the two images is reflected by the value of the SSIM; the minimum value is 0, and the maximum value is 1. A value closer to 1 indicates that the two images are more similar. Taking the human visual system as the starting point, this method can effectively simulate human visual perception to extract information about the structure of an image. The evaluation result is very close to the subjective perception of the human eye; therefore, this metric is widely used in image quality evaluations.

(iv) Lightness order error (LOE). Considering that the relative order of lightness of different image areas reflects the direction of the light source and the variation in illumination, Ref. [136] proposed the LOE metric to measure the discrepancy in lightness order between an original image f and its enhanced version f_e. The LOE is defined as

$$\mathrm{LOE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}RD(i,j) \quad (37)$$

where RD(i,j) is the difference in the relative lightness order between the original image f and its enhanced version f_e at pixel (i,j). This difference is defined as follows:

$$RD(i,j)=\sum_{x=1}^{M}\sum_{y=1}^{N}U\bigl(L(i,j),L(x,y)\bigr)\oplus U\bigl(L_{e}(i,j),L_{e}(x,y)\bigr) \quad (38)$$

where M and N are the image width and height, respectively; \oplus is the exclusive-or operator; and L(i,j) and L_e(i,j) are the maximum values among the three color channels at location (i,j) for f and f_e, respectively. The function U(p,q) returns a value of 1 if p >= q; otherwise, it returns 0. The smaller the LOE value is, the better the lightness order is preserved.

The above measures have the following advantages: they are simple to calculate, have clear physical meanings, and enable mathematically convenient optimizations.
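The reference-based measures are equally direct to express in code. Below is a NumPy sketch of Eqs. (34), (35), (37) and (38), assuming floating-point or 8-bit arrays; the SSIM is omitted because its combination function F and windowing follow [301]. Note that this LOE uses the naive all-pairs comparison, which is O((MN)^2); practical implementations (including the code released with [136]) downsample the image first:

```python
import numpy as np

def mse(f, fe):
    """Eq. (34): mean squared error between input f and enhanced image fe."""
    f = f.astype(np.float64)
    fe = fe.astype(np.float64)
    return float(np.mean((f - fe) ** 2))

def psnr(f, fe, f_max=255.0):
    """Eq. (35): PSNR = 10 * log10(f_max^2 / MSE); infinite for identical images."""
    m = mse(f, fe)
    return float('inf') if m == 0 else float(10.0 * np.log10(f_max ** 2 / m))

def loe(f, fe):
    """Eqs. (37)-(38): lightness order error for H x W x 3 color images.
    L is the per-pixel maximum over the three color channels; RD(i, j) counts
    order relations that flip between f and fe, via an exclusive-or."""
    L = f.max(axis=2).astype(np.float64).ravel()
    Le = fe.max(axis=2).astype(np.float64).ravel()
    U = L[:, None] >= L[None, :]             # U(L(i,j), L(x,y)) for all pairs
    Ue = Le[:, None] >= Le[None, :]          # same for the enhanced image
    rd = np.logical_xor(U, Ue).sum(axis=1)   # RD for each pixel
    return float(rd.mean())                  # (1 / MN) * sum of RD(i, j)
```

As the text states, a strictly monotonic brightening (e.g., an ideal gamma curve applied in floating point) changes the MSE and PSNR but leaves the lightness order, and hence the LOE, untouched; LOE only penalizes enhancers that reorder region brightness.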
IV. ANALYSIS OF DIFFERENT ENHANCEMENT METHODS
To compare the enhancement effects of various algorithms as well as the consistency of subjective and objective evaluations, experiments using many methods are presented in this paper for illustration. A test platform was built on a desktop computer to verify the algorithms. This system includes an Intel(R) Core(TM) i7-6700 CPU @ 3.4 GHz with 16 GB of RAM and runs the Windows 10 operating system.

A. SUBJECTIVE EVALUATION
The test images shown in Fig. 27 represent three different illumination conditions, namely, uniform low light, uneven illumination and nighttime; the source images are named 'Flowers.bmp', 'Building.bmp' and 'Lawn.bmp', respectively. In addition, we adopt two pairs of images for reference-based comparisons, where each pair consists of a low-light image, as shown in the top row of Fig. 28, and a corresponding well-exposed image, as shown in the bottom row. In Fig. 28, the image on the left is named 'Desk.bmp', and the image on the right is 'Road.bmp'.

FIGURE 27. Low-light images under three illumination conditions (figure best viewed in color).
FIGURE 28. Two pairs of images with different exposures (figure best viewed in color).

The experimental results are shown in Figs. 29-33. In these figures, panel (a) contains the original image, and panels (b)-(r) display the results of many enhancement methods: Gamma correction [3], AHE [78], WT [166], BIMEF [223], CegaHE [102], CRM [225], CVC [89], Dong et al. [229], MBLLEN [263], HMF [101], LIME [13], MF [219], HE [66], MSRCP [127], MSRCR [115], NPE [136], and SRIE [126]. As shown in panels (b)-(r), all of these image enhancement methods improve the visual effect of the original image to some degree. The details become clearer with the Gamma, WT, AHE, HMF, CVC, MSRCP and SRIE methods, but the overall level of brightness is dark. Especially when the Gamma and CVC methods are used, the enhancement effects for the three types of images are similar, while the WT method makes the output image blurred. The AHE method achieves a better effect when processing uniformly illuminated low-light images, but a wheel halo effect appears in unevenly illuminated low-light images. Although the CegaHE, HE, and MSRCR methods can brighten the entire image, the hue changes dramatically for an image with uneven illumination, resulting in the loss of the real color of the original scene. Although the image brightness after processing with the SRIE method is not high, this method achieves a consistent image processing effect for all three types of images, and the tone recovery effect is superior. By comparison, the Dong, MBLLEN, MF, NPE, LIME and CRM methods demonstrate outstanding performance in color and detail enhancement, and their visual effects are obviously superior to those of the other abovementioned image enhancement methods. However, when the Dong and NPE methods are used to process unevenly illuminated low-light images such as the 'Building' and 'Road' images, overenhancement appears at the boundaries. The MF, MBLLEN and CRM methods better maintain the color of the original images compared with the above methods, but their overall effect is no better than that of the LIME method. The LIME method considers both brightness and hue information and maintains excellent realistic effects. Hence, the LIME method achieves higher color fidelity from the perspective of human visual perception.

FIGURE 29. Experimental results on 'Flowers.bmp' (figure best viewed in color).
FIGURE 30. Experimental results on 'Building.bmp' (figure best viewed in color).
FIGURE 31. Experimental results on 'Lawn.bmp' (figure best viewed in color).
FIGURE 32. Experimental results on 'Desk.bmp' (figure best viewed in color).
FIGURE 33. Experimental results on 'Road.bmp' (figure best viewed in color).

B. OBJECTIVE EVALUATION
Based on the above images, experiments for objective quality evaluation were performed using various IQA methods, including both NIQA and FIQA metrics.

1) NIQA-BASED EVALUATION
Eight metrics, namely, STD, IE, AG, BLIINDS-II [290], NIQE [289], BRISQUE [288], the contrast enhancement-based contrast-changed image quality measure (CEIQ) [311], and the spatial-spectral entropy-based quality measure (SSEQ) [312], were employed for NIQA-based evaluation. The experimental results obtained on 'Flowers.bmp', 'Building.bmp' and 'Lawn.bmp' are shown in Tables 4-6, and the best score in terms of each metric is highlighted in bold.

TABLE 4. Objective evaluation of various methods on 'Flowers.bmp' using NIQA metrics.
TABLE 5. Objective evaluation of various methods on 'Building.bmp' using NIQA metrics.
TABLE 6. Objective evaluation of various methods on 'Lawn.bmp' using NIQA metrics.

These data show that the different evaluation metrics assign different scores to the same image enhancement algorithm and that the interpretations of the evaluation results are completely opposite in some cases. The reason is that the traditional IQA metrics used in this evaluation consider different aspects of the image obtained after enhancement. For the three images, no method achieves the best score on all metrics. The HE method achieves the highest scores in terms of the IE and CEIQ metrics on the three images. The LIME method achieves the best scores in terms of the AG metric for 'Building.bmp' and 'Lawn.bmp'. The CVC method achieves the best score on the STD metric for 'Flowers.bmp' and 'Lawn.bmp'. Overall, the HE and CVC methods are the top two scoring methods based on these metrics. However, to some extent, the distortion of chrominance information causes the results of the objective evaluation to be opposite to those of the subjective evaluation.

2) FIQA-BASED EVALUATION
For the FIQA-based evaluation, eleven metrics were selected, namely, MSE, PSNR, SSIM [301], LOE [136], PCQI [302], GMSD [304], VIF [305], VSI [306], FSIM [308], RVSIM [313], and IFC [314]. The reference images for 'Desk.bmp' and 'Road.bmp' are shown in panel (a) of Fig. 33 and Fig. 34, and the experimental data are listed in Tables 7-8, where the best score in terms of each metric is highlighted in bold.

TABLE 7. Objective evaluation of various methods on 'Desk.bmp' using FIQA metrics.
TABLE 8. Objective evaluation of various methods on 'Road.bmp' using FIQA metrics.

From these data, it can be seen that the best scores in terms of the different metrics are relatively concentrated among certain image enhancement algorithms. For example, the CegaHE method earns the best scores according to seven of the above eleven metrics on 'Desk.bmp'. For the image 'Road.bmp', BIMEF, CRM and MF achieve four, four and two of the best scores, respectively. In addition, these three methods have the same scores in terms of the GMSD and VSI metrics for the evaluation on 'Road.bmp'.
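The "best score in bold" bookkeeping behind Tables 4-8 reduces to a per-metric argmax (or argmin, for error-type metrics such as MSE, LOE, GMSD, NIQE and BRISQUE). A small sketch of that aggregation follows; the method and metric names come from the tables above, but the numeric scores here are invented placeholders purely for illustration, not the paper's measured values:

```python
# Hypothetical scores for illustration only (NOT values from Tables 4-8).
scores = {
    "HE":   {"IE": 7.9, "PSNR": 14.2, "LOE": 412.0},
    "LIME": {"IE": 7.4, "PSNR": 17.8, "LOE": 230.0},
    "CVC":  {"IE": 7.6, "PSNR": 15.1, "LOE": 305.0},
}

# Metrics where a SMALLER value is better; all others are higher-is-better.
lower_is_better = {"LOE", "MSE", "GMSD", "NIQE", "BRISQUE"}

def best_per_metric(scores):
    """Return {metric: best-scoring method}, mimicking the bold table entries."""
    metrics = next(iter(scores.values())).keys()
    best = {}
    for m in metrics:
        pick = min if m in lower_is_better else max
        best[m] = pick(scores, key=lambda method: scores[method][m])
    return best

print(best_per_metric(scores))  # {'IE': 'HE', 'PSNR': 'LIME', 'LOE': 'LIME'}
```

Even in this toy setting, no single method wins every metric, which mirrors the observation in the text that the best scores are spread across several algorithms.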
To some extent, FIQA-based evaluations provide a more accurate description of the images and are more consistent with subjective evaluation results than NIQA-based evaluations are for cases in which reference images are available.

C. TIME COMPLEXITY
To test the processing speeds of the various methods, experiments were performed using images of various sizes, and all algorithms were run in MATLAB except the MBLLEN method [263]. Table 9 shows that the Retinex-based methods (MSR, MSRCR, and MSRCP) have high computational complexities because of their multiscale Gaussian filtering operations. The NPE, SRIE and MBLLEN methods have the lowest computational efficiency for processing a single image because of the use of iterative computations to find the optimal solution. When processing an image with pixel dimensions of 3200 x 2400, their processing times are 206 seconds, 649 seconds and 890 seconds, respectively. In contrast, gamma correction and the various HE-based methods (AHE, HMF, and CegaHE) are faster, and their run times are only slightly affected by an increase in the image size. In particular, when the gamma correction method is run on an image of 3200 x 2400 pixels, it needs only 70 milliseconds, which is roughly 1/12700th of the run time of the SRIE method. Therefore, the gamma correction method has an absolute advantage in terms of run time. For images with pixel dimensions of 1600 x 1200, the gamma correction method and the HE-based methods can be used under real-time conditions.

TABLE 9. Comparison of time complexity (unit: seconds).

The IQA metrics considered above are not completely consistent with subjective human perception and thus are not suitable for the direct evaluation of enhanced low-light images; they need to be combined with subjective evaluations based on human vision. Therefore, there is a great need to design and develop an objective quality assessment method for low-light image enhancement that shows good agreement with the mechanism of human vision.

V. CONCLUSION
This paper summarizes seven widely used classes of low-light image enhancement algorithms and their improved versions and describes the underlying principles of the different methods. Then, it introduces the current quality evaluation system for low-light images and identifies the problems with this existing system. Finally, many representative image enhancement methods are evaluated using both subjective and objective evaluation methods. The characteristics and performance of the existing methods are analyzed and summarized, and the shortcomings of the present work in this field are further revealed. The essential purpose of low-light image enhancement is to improve the image contrast both globally and locally in a certain range of the gray space in accordance with the distribution of the gray values of the original image pixels. Simultaneously, it should be ensured that the enhanced image shows good image quality with regard to the characteristics of human visual perception, noise suppression, image entropy maximization, brightness maintenance, etc. The merits and shortcomings of the various methods are summarized in Table 10.

TABLE 10. Merits and shortcomings of different methods.

Based on the limitations of the current methods, care must be taken in image enhancement to ensure an appropriate balance among several factors, such as the image color, visual effect and information entropy, while attempting to improve the visibility of the image contrast. However, the existing algorithms all have certain disadvantages, such as loss of detail, color distortion, or high computational complexity; thus, current low-light image enhancement techniques cannot guarantee the performance of a vision system in a low-light environment. In future research on low-illumination image enhancement, researchers should focus on the following tasks:

(i) Improve the robustness and adaptive capabilities of low-light image enhancement algorithms. The robustness and adaptive capabilities of the existing methods are insufficient to meet the requirements of practical applications. The ideal method should be able to adaptively adjust to different application conditions and different types of low-light images.

(ii) Reduce the computational complexity of the available algorithms. To satisfy the needs of practical applications, real-time methods are often in demand; however, most of the existing methods currently require a long processing time. In addition, the results of the existing methods are still susceptible to certain problems, such as color deviations and detail ambiguity. The high-performance processors in graphics processing units (GPUs) allow such algorithms to be parallelized, which can significantly improve their processing speed and may ultimately enable real-time image enhancement.

(iii) Establish a standard quality evaluation system. At present, there are too few specialized low-light image datasets, and the quality evaluation system is not mature. This limits the further development of this research field and the selection of suitable enhancement and restoration methods for practical applications.

(iv) Develop a video-based enhancement algorithm. Currently, most of the research in this field has focused on single images, and research on video enhancement has not received sufficient attention; by contrast, video processing plays a greater role in practical applications. There is an urgent need to solve the problems related to the efficiency of low-illumination video processing, interframe consistency and so on.

In summary, thus far, no image enhancement algorithm exists that is optimal in terms of all of the above issues simultaneously. Therefore, it is necessary to select the most suitable image enhancement algorithm based on application-specific requirements.
It is hoped that image enhancement technology can be advanced to a higher level through in-depth studies of these enhancement algorithms, thus allowing this technology to play an important role in multiple disciplines.

APPENDIX
Abbreviation — Phrase
HE — Histogram equalization
CDF — Cumulative distribution function
GHE — Global histogram equalization
LHE — Local histogram equalization
BBHE — Brightness-preserving bi-histogram equalization
DSIHE — Dualistic subimage histogram equalization
MMBEBHE — Minimum mean brightness error bi-histogram equalization
IBBHE — Iterative brightness bi-histogram equalization
AHE — Adaptive histogram equalization
POSHE — Partially overlapped subblock histogram equalization
CLAHE — Contrast-limited adaptive histogram equalization
FIRF — Finite impulse response filter
RMSHE — Recursive mean-separate histogram equalization
CVC — Contextual and variational contrast
BBPHE — Background brightness-preserving histogram equalization
GCCHE — Gain-controllable clipped histogram equalization
RSIHE — Recursive subimage histogram equalization
DHE — Dynamic histogram equalization
BPDHE — Brightness-preserving dynamic histogram equalization
EDSHE — Entropy-based dynamic subhistogram equalization
BHEPL — Bi-histogram equalization with a plateau limit
MMSICHE — Median-mean based subimage-clipped histogram equalization
ESIHE — Exposure-based subimage histogram equalization
AMHE — Adaptively modified histogram equalization
WHE — Weighted histogram equalization
HMF — Histogram modification framework
CegaHE — Gap adjustment for histogram equalization
SSR — Single-scale Retinex
MSR — Multiscale Retinex
MSRCR — Multiscale Retinex with color restoration
KBR — Kernel-based Retinex
SRIE — Simultaneous reflectance and illumination estimation
MSRCP — Multiscale Retinex with chromaticity preservation
NPE — Naturalness preserved enhancement
WT — Wavelet transform
DCT — Discrete cosine transform
KGWT — Knee function and Gamma correction
HDR — High dynamic range
CNN — Convolutional neural network
DCP — Dark channel prior
CEM — Color estimation model
LLCNN — Low-light CNN
MEF — Multiple exposure image fusion
IQA — Image quality assessment
HVS — Human visual system
AG — Average gradient
MSE — Mean square error
PSNR — Peak signal-to-noise ratio
SSIM — Structural similarity index
BIQI — Blind image quality index
BRISQUE — Blind/referenceless image spatial quality evaluator
NIQE — Naturalness image quality evaluator

ACKNOWLEDGMENT
The authors thank AJE for linguistic assistance during the preparation of this manuscript.

REFERENCES
[1] H. Wang, Y. Zhang, and H. Shen, "Review of image enhancement algorithms," (in Chinese), Chin. Opt., vol. 10, no. 4, pp. 438-448, 2017.
[2] W. Wang, X. Yuan, X. Wu, and Y. Liu, "Fast image dehazing method based on linear transformation," IEEE Trans. Multimedia, vol. 19, no. 6, pp. 1142-1155, Jun. 2017.
[3] M. Fang, H. Li, and L. Lei, "A review on low light video image enhancement algorithms," (in Chinese), J. Changchun Univ. Sci. Technol., vol. 39, no. 3, pp. 56-64, 2016.
[4] J. Yu, D. Li, and Q. Liao, "Color constancy-based visibility enhancement of color images in low-light conditions," (in Chinese), Acta Automatica Sinica, vol. 37, no. 8, pp. 923-931, 2011.
[5] S. Ko, S. Yu, W. Kang, C. Park, S. Lee, and J. Paik, "Artifact-free low-light video enhancement using temporal similarity and guide map," IEEE Trans. Ind. Electron., vol. 64, no. 8, pp. 6392-6401, Aug. 2017.
[6] X. Fu, G. Fan, Y. Zhao, and Z. Wang, "A new image enhancement algorithm for low illumination environment," in Proc. IEEE Int. Conf. Comput. Sci. Autom. Eng., Jun. 2011, pp. 625-627.
[7] S. Park, K. Kim, S. Yu, and J. Paik, "Contrast enhancement for low-light image enhancement: A survey," IEIE Trans. Smart Process. Comput., vol. 7, no. 1, pp. 36-48, Feb. 2018.
[8] K. Yang, X. Zhang, and Y. Li, "A biological vision inspired framework for image enhancement in poor visibility conditions," IEEE Trans. Image Process., vol. 29, pp. 1493-1506, Sep. 2019, doi: 10.1109/TIP.2019.2938310.
[9] C. Dai, M. Lin, J. Wang, and X. Hu, "Dual-purpose method for underwater and low-light image enhancement via image layer separation," IEEE Access, vol. 7, pp. 178685-178698, 2019.
[10] Y.-F. Wang, H.-M. Liu, and Z.-W. Fu, "Low-light image enhancement via the absorption light scattering model," IEEE Trans. Image Process., vol. 28, no. 11, pp. 5679-5690, Nov. 2019.
[11] M. Kim, D. Park, D. K. Han, and H. Ko, "A novel framework for extremely low-light video enhancement," in Proc. IEEE Int. Conf. Consum. Electron., Jan. 2014, pp. 91-92.
[12] M. H. Conde, B. Zhang, K. Kagawa, and O. Loffeld, "Low-light image enhancement for multiaperture and multitap systems," IEEE Photon. J., vol. 8, no. 2, pp. 1-25, Apr. 2016.
[13] X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Trans. Image Process., vol. 26, no. 2, pp. 982-993, Feb. 2017.
[14] Q. Mu, Y. Wei, and J. Li, "Research on the improved Retinex algorithm for low illumination image enhancement," (in Chinese), J. Harbin Eng. Univ., vol. 39, no. 12, pp. 1-7, Jan. 2018.
[15] K. Aditya, V. Reddy, and R. Hariharan, "Enhancement technique for improving the reliability of disparity map under low light condition," in Proc. Int. Conf. Innov. Autom. Mechatronics Eng., 2014, pp. 236-243.
[16] Z. Shi, M. Zhu, B. Guo, and M. Zhao, "A photographic negative imaging inspired method for low illumination night-time image enhancement," Multimedia Tools Appl., vol. 76, no. 13, pp. 15027-15048, Jul. 2017.
[17] R. Chandrasekharan and M. Sasikumar, "Fuzzy transform for contrast enhancement of nonuniform illumination images," IEEE Signal Process. Lett., vol. 25, no. 6, pp. 813-817, Jun. 2018.
[18] Y. Chen, X. Xiao, H.-L. Liu, and P. Feng, "Dynamic color image resolution compensation under low light," Optik, vol. 126, no. 6, pp. 603-608, Mar. 2015.
[19] J. Zhu, L. Li, and W. Jin, "Natural-appearance colorization and enhancement for the low-light-level night vision imaging," (in Chinese), Acta Photonica Sinica, vol. 47, no. 4, pp. 159-198, 2018.
[20] L. Jinhong and Z. Mei, "Design and realization of low-light-level CMOS image sensor," (in Chinese), Infr. Laser Eng., vol. 47, no. 7, 2018, Art. no. 720002.
[21] N. Faramarzpour, M. J. Deen, S. Shirani, Q. Fang, L. W. C. Liu, F. de Souza Campos, and J. W. Swart, "CMOS-based active pixel for low-light-level detection: Analysis and measurements," IEEE Trans. Electron Devices, vol. 54, no. 12, pp. 3229-3237, Nov. 2007.
[22] Z. Yuantao, C. Mengyang, S. Dexin, and L. Yinnian, "Digital TDI technology based on global shutter sCMOS image sensor for low-light-level imaging," (in Chinese), Acta Optica Sinica, vol. 38, no. 9, 2018, Art. no. 0911001.
[23] S. Hao, Z. Feng, and Y. Guo, "Low-light image enhancement with a refined illumination map," Multimedia Tools Appl., vol. 77, no. 22, pp. 29639-29650, Nov. 2018.
[24] F. Zhang, X. Wei, and S. Qiang, "A perception-inspired contrast enhancement method for low-light images in gradient domain," (in Chinese), J. Comput.-Aided Des. Comput. Graph., vol. 26, no. 11, pp. 1981-1988.
[25] Y.-H. Shiau, P.-Y. Chen, H.-Y. Yang, and S.-Y. Li, "A low-cost hardware architecture for illumination adjustment in real-time applications," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 934-946, Sep. 2015.
[26] C.-C. Leung, K.-S. Chan, H.-M. Chan, and W.-K. Tsui, "A new approach for image enhancement applied to low-contrast-low-illumination IC and document images," Pattern Recognit. Lett., vol. 26, no. 6, pp. 769-778, May 2005.
[27] S.-Y. Yu and H. Zhu, "Low-illumination image enhancement algorithm based on a physical lighting model," IEEE Trans. Circuits Syst. Video Technol., vol. 29, no. 1, pp. 28-37, Jan. 2019.
[28] H.-J. Yun, Z.-Y. Wu, G.-J. Wang, G. Tong, and H. Yang, "A novel enhancement algorithm combined with improved fuzzy set theory for low illumination images," Math. Problems Eng., vol. 2016, no. 8, 2016, Art. no. 8598917.
[29] S. Sun, Y. Dong, and C. Tang, "An enhanced algorithm for single nighttime low illuminated vehicle-mounted video image," (in Chinese), Comput. Technol. Develop., vol. 28, no. 4, pp. 50-54, 2018.
[30] W. Wang, F. Chang, T. Ji, and X. Wu, "A fast single-image dehazing method based on a physical model and gray projection," IEEE Access, vol. 6, pp. 5641-5653, 2018.
[31] J. Lim, M. Heo, C. Lee, and C.-S. Kim, "Contrast enhancement of noisy low-light images based on structure-texture-noise decomposition," J. Vis. Commun. Image Represent., vol. 45, pp. 107-121, May 2017.
[32] G. Lyu, H. Huang, H. Yin, S. Luo, and X. Jiang, "A novel visual perception enhancement algorithm for high-speed railway in the low light condition," in Proc. 12th Int. Conf. Signal Process., Oct. 2014, pp. 1022-1025.
[33] S.-C. Pei and C.-T. Shen, "Color enhancement with adaptive illumination estimation for low-backlighted displays," IEEE Trans. Multimedia, vol. 19, no. 8, pp. 1956-1961, Aug. 2017.
[34] J. Zhang, P. Zhou, and Q. Zhang, "Low-light image enhancement based on iterative multi-scale guided filter Retinex," (in Chinese), J. Graph., vol. 39, no. 1, pp. 1-11, 2018.
[35] Y. Li, J. Wang, R. Xing, X. Hong, and R. Feng, "A new graph morphological enhancement operator for low illumination color image," in Proc. 7th Int. Symp. Comput. Intell. Design, Dec. 2014, pp. 505-508.
[36] H. Su and C. Jung, "Perceptual enhancement of low light images based on two-step noise suppression," IEEE Access, vol. 6, pp. 7005-7018, 2018.
[37] D. Mao, Z. Xie, and X. He, "Adaptive bilateral logarithm transformation with bandwidth preserving and low-illumination image enhancement," (in Chinese), J. Image Graph., vol. 22, no. 10, pp. 1356-1363, 2017.
[38] X. Sun, H. Liu, S. Wu, Z. Fang, C. Li, and J. Yin, "Low-light image enhancement based on guided image filtering in gradient domain," Int. J. Digit. Multimedia Broadcast., vol. 2017, Aug. 2017, Art. no. 9029315.
[39] R. Song, D. Li, and X. Wang, "Low illumination image enhancement algorithm based on HSI color space," (in Chinese), J. Graph., vol. 38, no. 2, pp. 217-223, 2017.
[40] B. Gupta and T. K. Agarwal, "New contrast enhancement approach for dark images with non-uniform illumination," Comput. Electr. Eng., vol. 70, pp. 616-630, Aug. 2018.
[41] L. Han, J. Xiong, and G. Geng, "Using HSV space real-color image enhanced by homomorphic filter in two channels," (in Chinese), Comput. Eng. Appl., vol. 45, no. 27, pp. 18-20, 2009.
[42] M. Iqbal, S. S. Ali, M. M. Riaz, A. Ghafoor, and A. Ahmad, "Color and white balancing in low-light image enhancement," Optik, vol. 209, May 2020, Art. no. 164260.
[43] F. Wu and U. KinTak, "Low-light image enhancement algorithm based on HSI color space," in Proc. 10th Int. Congr. Image Signal Process., Biomed. Eng. Informat., Oct. 2017, pp. 1-6.
[44] A. Nandal, V. Bhaskar, and A. Dhaka, "Contrast-based image enhancement algorithm using grey-scale and colour space," IET Signal Process., vol. 12, no. 4, pp. 514-521, Jun. 2018.
[45] Q. Mu, Y. Wei, and Z. Li, "Color image enhancement method based on weighted image guided filtering," 2018, arXiv:1812.09930. [Online]. Available: http://arxiv.org/abs/1812.09930
[46] Q. Xu, H. Jiang, R. Scopigno, and M. Sbert, "A novel approach for enhancing very dark image sequences," Signal Process., vol. 103, pp. 309-330, Oct. 2014.
[47] Z. Feng and S. Hao, "Low-light image enhancement by refining illumination map with self-guided filtering," in Proc. IEEE Int. Conf. Big Knowl., Aug. 2017, pp. 183-187.
[48] L. Florea, C. Florea, and C. Ionascu, "Avoiding the deconvolution: Framework oriented color transfer for enhancing low-light images," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, Jun. 2016, pp. 936-944.
[49] Z. Zhou, N. Sang, and X. Hu, "Global brightness and local contrast adaptive enhancement for low illumination color image," Optik, vol. 125, no. 6, pp. 1795-1799, Mar. 2014.
[50] D. Mu, C. Xu, and H. Ge, "Hybrid genetic algorithm based image enhancement technology," in Proc. Int. Conf. Internet Technol. Appl., Aug. 2011, pp. 1-4.
[51] J. Wang, "An enhancement algorithm for low-illumination color image with preserving edge," (in Chinese), Comput. Technol. Develop., vol. 28, no. 1, pp. 116-120, 2018.
[52] K. Srinivas and A. K. Bhandari, "Low light image enhancement with adaptive sigmoid transfer function," IET Image Process., vol. 14, no. 4, pp. 668-678, Mar. 2020.
[53] W. Kim, R. Lee, M. Park, and S.-H. Lee, "Low-light image enhancement based on maximal diffusion values," IEEE Access, vol. 7, pp. 129150-129163, 2019.
[54] K. Panetta, S. Agaian, Y. Zhou, and E. J. Wharton, "Parameterized logarithmic framework for image enhancement," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 2, pp. 460-473, Apr. 2011.
[55] Z. Xiao, X. Zhang, F. Zhang, L. Geng, J. Wu, L. Su, and L. Chen, "Diabetic retinopathy retinal image enhancement based on gamma correction," J. Med. Imag. Health Informat., vol. 7, no. 1, pp. 149-154, Feb. 2017.
[56] F. Drago, K. Myszkowski, T. Annen, and N. Chiba, "Adaptive logarithmic mapping for displaying high contrast scenes," Comput. Graph. Forum, vol. 22, no. 3, pp. 419-426, Sep. 2003.
[57] L. Tao and V. Asari, "An integrated neighborhood dependent approach for nonlinear enhancement of color images," in Proc. Int. Conf. Inf. Technol. Coding Comput., 2004, p. 138.
[58] X. Tian, X. Xu, and C. Wu, "Low illumination color image enhancement algorithm based on LIP model," (in Chinese), J. Xian Univ. Posts Telecommun., vol. 20, no. 1, pp. 9-13, 2015.
[59] S.-C. Huang, F.-C. Cheng, and Y.-S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with weighting distribution," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1032-1041, Mar. 2013.
[60] N. Zhi, S. Mao, and M. Li, "An enhancement algorithm for coal mine low illumination images based on bi-Gamma function," (in Chinese), J. Liaoning Tech. Univ., vol. 37, no. 1, pp. 191-197, 2018.
[61] C.-Y. Yu, Y.-C. Ouyang, C.-M. Wang, and C.-I. Chang, "Adaptive inverse hyperbolic tangent algorithm for dynamic contrast adjustment in displaying scenes," EURASIP J. Adv. Signal Process., vol. 2010, no. 1, Dec. 2010, Art. no. 485151.
[62] S. C. Liu, S. Liu, H. Wu, M. A. Rahman, S. C.-F. Lin, C. Y. Wong, N. Kwok, and H. Shi, "Enhancement of low illumination images based on an optimal hyperbolic tangent profile," Comput. Electr. Eng., vol. 70, pp. 538-550, Aug. 2018.
[63] D. David, "Low illumination image enhancement algorithm using iterative recursive filter and visual gamma transformation function," in Proc. 5th Int. Conf. Adv. Comput. Commun., Sep. 2015, pp. 408-411.
[64] Y. Huang, "A Retinex image enhancement based on L channel illumination estimation and Gamma function," Techn. Autom. Appl., vol. 37, no. 5, pp. 56-60, 2018.
[65] S. Lee, N. Kim, and J. Paik, "Adaptively partitioned block-based contrast enhancement and its application to low light-level video surveillance," SpringerPlus, vol. 4, no. 1, Dec. 2015, Art. no. 431.
[66] L. Li, S. Sun, and C. Xia, "Survey of histogram equalization technology," Comput. Syst. Appl., vol. 23, no. 3, pp. 1-8, 2014.
[67] K. Singh, R. Kapoor, and S. K. Sinha, "Enhancement of low exposure images via recursive histogram equalization algorithms," Optik, vol. 126, no. 20, pp. 2619-2625, Oct. 2015.
[68] A. Singh and K. Gupta, "A contrast enhancement technique for low light images," in Proc. Int. Conf. Commun. Syst., 2016, pp. 220-230.
[69] Q. Wang and R. Ward, "Fast image/video contrast enhancement based on weighted thresholded histogram equalization," IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 757-764, Jul. 2007.
[70] R. Dale-Jones and T. Tjahjadi, "A study and modification of the local histogram equalization algorithm," Pattern Recognit., vol. 26, no. 9, pp. 1373-1381, Sep. 1993.
[71] M. F. Khan, E. Khan, and Z. A. Abbasi, "Segment dependent dynamic multi-histogram equalization for image contrast enhancement," Digit. Signal Process., vol. 25, pp. 198-223, Feb. 2014.
[72] L. Li, S. Sun, and C. Xia, "Survey of histogram equalization technology," (in Chinese), Comput. Syst. Appl., vol. 23, no. 3, pp. 1-8.
[73] Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Trans. Consum. Electron., vol. 43, no. 1, pp. 1-8, Feb. 1997.
[74] Y. Wang, Q. Chen, and B.
[82] A. M. Reza, "Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement," J. VLSI Signal Process.-Syst. Signal, Image, Video Technol., vol. 38, no. 1, pp. 35-44, Aug. 2004.
[83] M. F. Al-Sammaraie, "Contrast enhancement of roads images with foggy scenes based on histogram equalization," in Proc. 10th Int. Conf. Comput. Sci. Educ. (ICCSE), Jul. 2015, pp. 95-101.
[84] G. Yadav, S. Maheshwari, and A. Agarwal, "Foggy image enhancement using contrast limited adaptive histogram equalization of digitally filtered image: Performance improvement," in Proc. Int. Conf. Adv. Comput., Commun. Informat., Sep. 2014, pp. 2225-2231.
[85] S.-D. Chen and A. R. Ramli, "Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation," IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 1301-1309, Nov. 2003.
[86] J. Jiang, Y. Zhang, and F. Xue, "Local histogram equalization with brightness preservation," (in Chinese), Acta Electronica Sinica, vol. 34, no. 5, pp. 861-866, 2006.
[87] C. Sun and F. Yuan, "Partially overlapped sub-block histogram equalization based on recursive equal area separateness," (in Chinese), Opt. Precis. Eng., vol. 17, no. 9, pp. 2292-2300, 2009.
[88] Y. Tian, Q. Wan, and F. Wu, "Local histogram equalization based on the minimum brightness error," in Proc. 4th Int. Conf. Image Graph. (ICIG), Aug. 2007, pp. 58-61.
[89] T. Celik and T. Tjahjadi, "Contextual and variational contrast enhancement," IEEE Trans. Image Process., vol. 20, no. 12, pp. 3431-3441, Dec. 2011.
[90] T. L. Tan, K. S. Sim, and C. P. Tso, "Image enhancement using background brightness preserving histogram equalisation," Electron. Lett., vol. 48, no. 3, pp. 155-157, 2012.
[91] K. Singh, D. K. Vishwakarma, G. S. Walia, and R. Kapoor, "Contrast enhancement via texture region based histogram equalization," J. Mod. Opt., vol. 63, no. 15, pp. 1444-1450, Aug. 2016.
[92] K. S. Sim, C. P. Tso, and Y. Y. Tan, "Recursive sub-image histogram equalization applied to gray scale images," Pattern Recognit. Lett., vol. 28, no. 10, pp. 1209-1221, Jul. 2007.
[93] A. S. Parihar and O. P. Verma, "Contrast enhancement using entropy-based dynamic sub-histogram equalisation," IET Image Process., vol. 10, no. 11, pp. 799-808, Nov. 2016.
[94] M. Abdullah-Al-Wadud, M. Kabir, and M. Dewan, "A dynamic histogram equalization for image contrast enhancement," IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 593-600, May 2007.
[95] H. Ibrahim and N. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752-1758, Nov. 2007.
[96] C. Ooi, N. P. Kong, and H. Ibrahim, "Bi-histogram equalization with a plateau limit for digital image enhancement," IEEE Trans. Consum. Electron., vol. 55, no. 4, pp. 2072-2080, Nov. 2009.
[97] K. Singh and R. Kapoor, "Image enhancement via median-mean based
Zhang, ``Image enhancement based on equal sub-image-clipped histogram equalization,'' Optik, vol. 125, no. 17, area dualistic sub-image histogram equalization method,'' IEEE Trans. pp. 46464651, Sep. 2014. Consum. Electron., vol. 45, no. 1, pp. 6875, Feb. 1999. [98] K. Singh and R. Kapoor, ``Image enhancement using exposure based sub [75] S.-D. Chen and A. R. Ramli, ``Minimum mean brightness error bi- image histogram equalization,'' Pattern Recognit. Lett., vol. 36, no. 1, histogram equalization in contrast enhancement,'' IEEE Trans. Consum. pp. 1014, Jan. 2014. Electron., vol. 49, no. 4, pp. 13101319, Nov. 2003. [99] H. Kim, J. Lee, and J. Lee, ``Contrast enhancement using adaptively [76] H. Shen, S. Sun, and B. Lei, ``An adaptive brightness preserving bi- modied histogram equalization,'' in Proc. Pacic-Rim Symp. Image histogram equalization,'' Proc. SPIE, vol. 8005, Dec. 2011, Art. no. 8005. Video Technol., 2006, pp. 11501158. [77] X. Tian, D. Qiao, and C. Wu, ``Color image enhancement based on bi- [100] M. Kim and M. Chung, ``Recursively separated and weighted histogram histogram equalization,'' (in Chinese), J. Xian Univ. Posts Telecommun., equalization for brightness preservation and contrast enhancement,'' vol. 20, no. 2, pp. 5863, 2015. IEEE Trans. Consum. Electron., vol. 54, no. 3, pp. 13891397, Aug. 2008. [78] T. K. Kim, J. K. Paik, and B. S. Kang, ``Contrast enhancement sys- [101] T. Arici, S. Dikbas, and Y. Altunbasak, ``A histogram modication frame- tem using spatially adaptive histogram equalization with temporal l- work and its application for image contrast enhancement,'' IEEE Trans. tering,'' IEEE Trans. Consum. Electron., vol. 44, no. 1, pp. 8287, Image Process., vol. 18, no. 9, pp. 19211935, Sep. 2009. Feb. 1998. [102] C.-C. Chiu and C.-C. Ting, ``Contrast enhancement algorithm based on [79] J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, ``An advanced contrast enhance- gap adjustment for histogram equalization,'' Sensors, vol. 16, no. 
6, 2016, ment using partially overlapped sub-block histogram equalization,'' Art. no. 936. IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 4, pp. 475484, [103] S. Kansal, S. Purwar, and R. K. Tripathi, ``Image contrast enhancement Apr. 2001. using unsharp masking and histogram equalization,'' Multimedia Tools [80] B. Liu, W. Jin, Y. Chen, C. Liu, and L. Li, ``Contrast enhancement using Appl., vol. 77, no. 20, pp. 2691926938, Oct. 2018. non-overlapped sub-blocks and local histogram projection,'' IEEE Trans. [104] E. H. Land and J. J. McCann, ``Lightness and Retinex theory,'' J. Opt. Consum. Electron., vol. 57, no. 2, pp. 583588, May 2011. Soc. Amer., vol. 61, no. 1, pp. 111, Jan. 1971. [81] S.-C. Huang and C.-H. Yeh, ``Image contrast enhancement for preserving [105] S. Park, S. Yu, B. Moon, S. Ko, and J. Paik, ``Low-light image enhance- mean brightness without losing image features,'' Eng. Appl. Artif. Intell., ment using variational optimization-based Retinex model,'' IEEE Trans. vol. 26, nos. 56, pp. 14871492, May 2013. Consum. Electron., vol. 63, no. 2, pp. 178184, May 2017. 87912 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods [106] H. Tanaka, Y. Waizumi, and T. Kasezawa, ``Retinex-based signal [131] C.-T. Shen and W.-L. Hwang, ``Color image enhancement using Retinex enhancement for image dark regions,'' in Proc. IEEE Int. Conf. Signal with robust envelope,'' in Proc. 16th IEEE Int. Conf. Image Process., Image Process. Appl., Sep. 2017, pp. 205209. Nov. 2009, pp. 31413144. [107] S. Liao, Y. Piao, and B. Li, ``Low illumination color image enhancement [132] I.-S. Jang, K.-H. Park, and Y.-H. Ha, ``Color correction by estimation of based on improved retinex,'' in Proc. LIDAR Imag. Detection Target dominant chromaticity in multi-scaled Retinex,'' J. Imag. Sci. Technol., Recognit., Nov. 2017, p. 160. vol. 53, no. 5, pp. 501512, 2009. [108] M. Li, J. Liu, W. Yang, X. Sun, and Z. 
Guo, ``Structure-revealing low- [133] S. Wang, X. Ding, and Y. Liao, ``A novel bio-inspired algorithm for light image enhancement via robust Retinex model,'' IEEE Trans. Image color image enhancement,'' (in Chinese), Acta Electronica Sinica, vol. 36, Process., vol. 27, no. 6, pp. 28282841, Jun. 2018. no. 10, pp. 19701973, 2008. [109] H. Hu and G. Ni, ``Color image enhancement based on the improved [134] Q. Xiao, X. Ding, S. Wang, Y. Liao, and D. Guo, ``A halo-free and retinex,'' in Proc. Int. Conf. Multimedia Technol., Oct. 2010, pp. 14. hue preserving algorithm for color image enhancement,'' (in Chinese), [110] H.-G. Lee, S. Yang, and J.-Y. Sim, ``Color preserving contrast enhance- J. Comput.-Aided Des. Comput. Graph., vol. 22, no. 8, pp. 12461252, ment for low light level images based on retinex,'' in Proc. AsiaPacic Sep. 2010. Signal Inf. Process. Assoc. Annu. Summit Conf., Dec. 2015, pp. 884887. [135] C.-H. Lee, J.-L. Shih, C.-C. Lien, and C.-C. Han, ``Adaptive multiscale [111] H. Liu, X. Sun, H. Han, and W. Cao, ``Low-light video image enhance- Retinex for image contrast enhancement,'' in Proc. Int. Conf. Signal- ment based on multiscale retinex-like algorithm,'' in Proc. Chin. Control Image Technol. Internet-Based Syst., Dec. 2013, pp. 4350. Decis. Conf., May 2016, pp. 37123715. [136] S. Wang, J. Zheng, H.-M. Hu, and B. Li, ``Naturalness preserved enhance- [112] D. J. Jobson, Z. Rahman, and G. A. Woodell, ``Properties and perfor- ment algorithm for non-uniform illumination images,'' IEEE Trans. mance of a center/surround retinex,'' IEEE Trans. Image Process., vol. 6, Image Process., vol. 22, no. 9, pp. 35383548, Sep. 2013. no. 3, pp. 451462, Mar. 1997. [137] D. Wang, X. Niu, and Y. Dou, ``A piecewise-based contrast enhancement [113] Z. Rahman, D. J. Jobson, and G. A. Woodell, ``Multi-scale retinex for framework for low lighting video,'' in Proc. IEEE Int. Conf. Secur., color image enhancement,'' in Proc. 3rd IEEE Int. Conf. 
Image Process., Pattern Anal., Cybern., Oct. 2014, pp. 235240. Sep. 1996, pp. 10031006. [138] J. Xiao, S. Shan, and P. Duan, ``A fast image enhancement algorithm [114] D. J. Jobson, Z. Rahman, and G. A. Woodell, ``A multiscale retinex based on fusion of different color spaces,'' Acta Automatica Sinica, for bridging the gap between color images and the human observation vol. 40, no. 4, pp. 697705, 2014. of scenes,'' IEEE Trans. Image Process., vol. 6, no. 7, pp. 965976, [139] X. Fu, Y. Liao, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding, ``A prob- Jul. 2002. abilistic method for image enhancement with simultaneous illumination [115] D. J. Jobson, ``Retinex processing for automatic image enhancement,'' and reectance estimation,'' IEEE Trans. Image Process., vol. 24, no. 12, J. Electron. Imag., vol. 13, no. 1, pp. 100110, Jan. 2004. pp. 49654977, Dec. 2015. [116] X. Ren, W. Yang, W. Cheng, and J. Liu, ``LR3M: Robust low- [140] H. Zhao, C. Xiao, and J. Yu, ``A Retinex algorithm for night color image light enhancement via low-rank regularized retinex model,'' IEEE enhancement by MRF,'' (in Chinese), Opt. Precis. Eng., vol. 22, no. 4, Trans. Image Process., vol. 29, pp. 58625876, Apr. 2020, doi: pp. 10481055, 2014. 10.1109/TIP.2020.2984098. [141] H. Zhao, C. Xiao, and J. Yu, ``Retinex algorithm for night color image [117] S. Hao, X. Han, Y. Guo, X. Xu, and M. Wang, ``Low-light image enhance- enhancement based on WLS,'' (in Chinese), J. Beijing Univ. Technol., ment with semi-decoupled decomposition,'' IEEE Trans. Multimedia, vol. 40, no. 3, pp. 404410, 2014. early access, Jan. 27, 2020, doi: 10.1109/TMM.2020.2969790. [142] J. Ho Jang, Y. Bae, and J. Beom Ra, ``Contrast-enhanced fusion of mul- [118] Z. Gu, F. Li, F. Fang, and G. Zhang, ``A novel retinex-based fractional- tisensor images using subband-decomposed multiscale Retinex,'' IEEE order variational model for images with severely low light,'' IEEE Trans. Image Process., vol. 21, no. 8, pp. 34793490, Aug. 2012. Trans. 
Image Process., vol. 29, pp. 32393253, Dec. 2019, doi: [143] X. Liu, T. Qiao, and Z. Qiao, ``Image enhancement method of mine based 10.1109/TIP.2019.2958144. on bilateral ltering and Retinex algorithm,'' (in Chinese), Ind. Mine [119] P. Hao, S. Wang, S. Li, and M. Yang, ``Low-light image enhancement Autom., vol. 43, no. 2, pp. 4954, 2017. based on retinex and saliency theories,'' in Proc. Chin. Autom. Congr., [144] S. Feng, ``Image enhancement algorithm based on real-time Retinex and Hangzhou, China, Nov. 2019, pp. 25942597. bilateral ltering,'' (in Chinese), Comput. Appl. Softw., vol. 26, no. 11, [120] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel, ``A variational pp. 234238, 2009. framework for Retinex,'' Int. J. Comput. Vis., vol. 52, no. 1, pp. 723, [145] M.-R. Wang and S.-Q. Jiang, ``Image enhancement algorithm combining multi-scale retinex and bilateral lter,'' in Proc. Int. Conf. Autom., Mech. [121] M. Elad, ``Retinex by two bilateral lters,'' in Proc. 5th Int. Conf. Scale Control Comput. Eng., 2015, pp. 12211226. Space PDE Methods Comput. Vis., Hofgeismar, Germany, Apr. 2005, [146] J. Yin, H. Li, J. Du, and P. He, ``Low illumination image Retinex enhance- pp. 217229. ment algorithm based on guided ltering,'' in Proc. IEEE 3rd Int. Conf. [122] L. Meylan and S. Susstrunk, ``High dynamic range image rendering with a Cloud Comput. Intell. Syst., Nov. 2014, pp. 639644. retinex-based adaptive lter,'' IEEE Trans. Image Process., vol. 15, no. 9, [147] A. Mulyantini and H.-K. Choi, ``Color image enhancement using a pp. 28202830, Sep. 2006. Retinex algorithm with bilateral ltering for images with poor illumina- [123] X. Xu, Q. Chen, and P. Wang, ``A fast halo-free image enhancement tion,'' J. Korea Multimedia Soc., vol. 19, no. 2, pp. 233239, Feb. 2016. method based on Retinex,'' (in Chinese), J. Comput.-Aided Des. Comput. [148] Y. Zhang, W. Huang, W. Bi, and G. Gao, ``Colorful image enhancement Graph., vol. 20, no. 10, pp. 13251331, 2008. 
algorithm based on guided lter and Retinex,'' in Proc. IEEE Int. Conf. [124] M. Bertalmío, V. Caselles, and E. Provenzi, ``Issues about Retinex the- Signal Image Process., Aug. 2016, pp. 3336. ory and contrast enhancement,'' Int. J. Comput. Vis., vol. 83, no. 1, [149] J. Wei, Q. Zhijie, X. Bo, and Z. Dean, ``A nighttime image enhancement pp. 101119, Jun. 2009. method based on Retinex and guided lter for object recognition of apple [125] M. K. Ng and W. Wang, ``A total variation model for Retinex,'' SIAM J. harvesting robot,'' Int. J. Adv. Robot. Syst., vol. 15, no. 1, pp. 112, Imag. Sci., vol. 4, no. 1, pp. 345365, Jan. 2011. Jan. 2018. [126] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding, ``A weighted [150] S. Zhang, G.-J. Tang, X.-H. Liu, S.-H. Luo, and D.-D. Wang, ``Retinex variational model for simultaneous reectance and illumination estima- based low-light image enhancement using guided ltering and variational tion,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, framework,'' (in Chinese), Optoelectron. Lett., vol. 14, no. 2, pp. 156 pp. 27822790. 160, Mar. 2018. [127] A. B. Petro, C. Sbert, and J.-M. Morel, ``Multiscale Retinex,'' Image [151] D. Zhu, G. Chen, P. N. Michelini, and H. Liu, ``Fast image enhancement Process. Line, vol. 4, pp. 7188, Apr. 2014. based on maximum and guided lters,'' in Proc. IEEE Int. Conf. Image [128] H. Lin and Z. Shi, ``Multi-scale Retinex improvement for nighttime image Process. (ICIP), Taipei, Taiwan, Sep. 2019, pp. 40804084. enhancement,'' Optik, vol. 125, no. 24, pp. 71437148, Dec. 2014. [129] F. Matin, Y. Jeong, K. Kim, and K. Park, ``Color image enhancement [152] M. Wang, Z. Tian, W. Gui, X. Zhang, and W. Wang, ``Low-light image using multiscale Retinex based on particle swarm optimization method,'' enhancement based on nonsubsampled shearlet transform,'' IEEE Access, J. Phys., Conf. Ser., vol. 960, no. 1, Jan. 2018, Art. no. 012026. vol. 8, pp. 6316263174, 2020. [130] S. Chen and A. 
Beghdadi, ``Natural rendering of color image based on [153] S. Wen and Z. You, ``Homomorphic ltering space domain algorithm for Retinex,'' in Proc. 16th IEEE Int. Conf. Image Process., Nov. 2009, performance optimization,'' (in Chinese), Comput. Appl. Res., vol. 17, pp. 18131816. no. 3, pp. 6265, 2000. VOLUME 8, 2020 87913 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods [154] J. Xiao, S. Song, and L. Ding, ``Research on the fast algorithm of spatial [177] K. Kawasaki and A. Taguchi, ``A multiscale Retinex based on wavelet homomorphic ltering,'' (in Chinese), J. Image Graph., vol. 13, no. 12, transformation,'' in Proc. IEEE AsiaPacic Conf. Circuits Syst., pp. 23022306, 2008. Nov. 2014, pp. 3336. [155] Y. Zhang and M. Xie, ``Colour image enhancement algorithm based [178] F. Russo, ``An image enhancement technique combining sharpening on HSI and local homomorphic ltering,'' (in Chinese), Comput. Appl. and noise reduction,'' IEEE Trans. Instrum. Meas., vol. 51, no. 4, Softw., vol. 30, no. 12, pp. 303307, 2013. pp. 824828, Aug. 2002. [156] Y. Zhang and M. Xie, ``Block-DCT based homomorphic ltering algo- [179] L. Chen, ``The application of wavelet transform in the image enhance- rithm for color image enhancement,'' Comput. Eng. Des., vol. 34, no. 5, ment processing,'' (in Chinese), J. Shaanxi Univ. Technol., vol. 30, no. 1, pp. 17521756, 2013. pp. 3237, 2014. [157] L. Xiao, C. Li, Z. Wu, and T. Wang, ``An enhancement method for X-ray [180] M. H. Asmare, V. S. Asirvadam, and A. F. M. Hani, ``Image enhancement image via fuzzy noise removal and homomorphic ltering,'' Neurocom- based on contourlet transform,'' Signal, Image Video Process., vol. 9, puting, vol. 195, pp. 5664, Jun. 2016. no. 7, pp. 16791690, Oct. 2015. [158] X. Tian, X. Cheng, and W. Chengmao, ``Color image enhancement [181] J.-L. Starck, F. Murtagh, E. J. Candes, and D. L. Donoho, ``Gray and color method based on Homomorphic ltering,'' (in Chinese), J. Xian Univ. 
image contrast enhancement by the curvelet transform,'' IEEE Trans. Posts Telecommun., vol. 20, no. 6, pp. 5155, 2015. Image Process., vol. 12, no. 6, pp. 706717, Jun. 2003. [159] A. Loza, D. Bull, and A. Achim, ``Automatic contrast enhancement of [182] G. G. Bhutada, R. S. Anand, and S. C. Saxena, ``Edge preserved image low-light images based on local statistics of wavelet coefcients,'' in enhancement using adaptive fusion of images denoised by wavelet and Proc. IEEE Int. Conf. Image Process., Sep. 2013, pp. 35533556. curvelet transform,'' Digit. Signal Process., vol. 21, no. 1, pp. 118130, [160] T. Sun, C. Jung, P. Ke, H. Song, and J. Hwang, ``Readability enhancement Jan. 2011. of low light videos based on discrete wavelet transform,'' in Proc. IEEE [183] X. Si, J. Wen, and X. Wang, ``Image enhancement algorithm based on Int. Symp. Multimedia, Dec. 2017, pp. 342345. curvelet transform II for low illumination colorful and noising images,'' (in Chinese), Command Inf. Syst. Technol., vol. 7, no. 4, pp. 8790, 2016. [161] C. Jung, Q. Yang, T. Sun, Q. Fu, and H. Song, ``Low light image enhance- ment with dual-tree complex wavelet transform,'' J. Vis. Commun. Image [184] T. Y. Han, D. H. Kim, S. H. Lee, and B. C. Song, ``Infrared image super- Represent., vol. 42, pp. 2836, Jan. 2017. resolution using auxiliary convolutional neural network and visible image [162] T. Sun and C. Jung, ``Readability enhancement of low light images under low-light conditions,'' J. Vis. Commun. Image Represent., vol. 51, based on dual-tree complex wavelet transform,'' in Proc. IEEE Int. Conf. pp. 191200, Feb. 2018. Acoust., Speech Signal Process., Mar. 2016, pp. 17411745. [185] T. Mikami, D. Sugimura, and T. Hamamoto, ``Capturing color and near- infrared images with different exposure times for image enhancement [163] T.-C. Hsung, D. P.-K. Lun, and W. W. L. Ng, ``Efcient fringe image under extremely low-light scene,'' in Proc. IEEE Int. Conf. 
Image Pro- enhancement based on dual-tree complex wavelet transform,'' Appl. Opt., cess., Oct. 2014, pp. 669673. vol. 50, no. 21, pp. 39733986, Jul. 2011. [186] H. Yamashita, D. Sugimura, and T. Hamamoto, ``Enhancing low-light [164] M. Z. Iqbal, A. Ghafoor, and A. M. Siddiqui, ``Satellite image resolution color images using an RGB-NIR single sensor,'' in Proc. Vis. Commun. enhancement using dual-tree complex wavelet transform and nonlocal Image Process., Dec. 2015, pp. 14. means,'' IEEE Geosci. Remote Sens. Lett., vol. 10, no. 3, pp. 451455, May 2013. [187] L. Li, Y. Si, and Z. Jia, ``Medical image enhancement based on CLAHE [165] M.-X. Yang, G.-J. Tang, X.-H. Liu, L.-Q. Wang, Z.-G. Cui, and S.-H. Luo, and unsharp masking in NSCT domain,'' J. Med. Imag. Health Informat., ``Low-light image enhancement based on retinex theory and dual- vol. 8, no. 3, pp. 431438, Mar. 2018. tree complex wavelet transform,'' Optoelectron. Lett., vol. 14, no. 6, [188] L. Wang, G. Fu, Z. Jiang, G. Ju, and A. Men, ``Low-light image enhance- pp. 470475, Nov. 2018. ment with attention and multi-level feature fusion,'' in Proc. IEEE Int. Conf. Multimedia Expo Workshops, Jul. 2019, pp. 276281. [166] X. Zhou, S. Zhou, and F. Huang, ``New algorithm of image enhancement based on wavelet transform,'' (in Chinese), Comput. Appl., vol. 25, no. 3, [189] A. Toet, M. A. Hogervorst, R. van Son, and J. Dijk, ``Augmenting full pp. 606608, 2005. colour-fused multi-band night vision imagery with synthetic imagery [167] X. Zong, A. F. Laine, and E. A. Geiser, ``Speckle reduction and contrast in real-time,'' Int. J. Image Data Fusion, vol. 2, no. 4, pp. 287308, enhancement of echocardiograms via multiscale nonlinear processing,'' Dec. 2011. IEEE Trans. Med. Imag., vol. 17, no. 4, pp. 532540, Aug. 1998. [190] M. Aguilar, D. A. Fay, and A. M. Waxman, ``Real-time fusion of low- light CCD and uncooled IR imagery for color night vision,'' Proc. SPIE, [168] A. oza, D. R. Bull, P. R. Hill, and A. M. 
Achim, ``Automatic contrast vol. 3364, pp. 124135, Jul. 1998. enhancement of low-light images based on local statistics of wavelet coefcients,'' Digit. Signal Process., vol. 23, no. 6, pp. 18561866, [191] B. Qi, G. Kun, Y.-X. Tian, and Z.-Y. Zhu, ``A novel false color mapping Dec. 2013. model-based fusion method of visual and infrared images,'' in Proc. [169] A. K. Bhandari, A. Kumar, and G. K. Singh, ``Improved knee transfer Int. Conf. Opt. Instrum. Technol., Optoelectron. Imag. Process. Technol., function and gamma correction based method for contrast and brightness Dec. 2013, Art. no. 904519. enhancement of satellite image,'' AEU-Int. J. Electron. Commun., vol. 69, [192] A. Toet, ``Colorizing single band intensied nightvision images,'' Dis- no. 2, pp. 579589, Feb. 2015. plays, vol. 26, no. 1, pp. 1521, Jan. 2005. [170] S. E. Kim, J. J. Jeon, and I. K. Eom, ``Image contrast enhancement using [193] W. Yang, J. Zhang, and H. Xu, ``Study of infrared and LLL image fusion entropy scaling in wavelet domain,'' Signal Process., vol. 127, no. 1, algorithm based on the target characteristics,'' (in Chinese), Laser Infr., pp. 111, Oct. 2016. vol. 44, no. 1, pp. 5660, 2014. [171] H. Demirel, C. Ozcinar, and G. Anbarjafari, ``Satellite image contrast [194] J. Zhu, W. Jin, and L. Li, ``Fusion of the low-light-level visible and enhancement using discrete wavelet transform and singular value decom- infrared images for night-vision context enhancement,'' (in Chinese), position,'' IEEE Geosci. Remote Sens. Lett., vol. 7, no. 2, pp. 333337, Chin. Opt. Lett., vol. 16, no. 1, pp. 9499, 2018. Apr. 2010. [195] Y. Zhang, L. Bai, and Q. Chen, ``Dual-band low level light image real- [172] Q. Li and Q. Liu, ``Adaptive enhancement algorithm for low illumina- time registration based on pixel spatial correlation degree,'' (in Chinese), tion images based on wavelet transform,'' (in Chinese), Chin. J. Lasers, J. Nanjing Univ. Sci. Technol., vol. 33, no. 4, pp. 506510, 2009. vol. 42, no. 2, pp. 
280286, 2015. [196] S. Yang and W. Liu, ``Color fusion method for low-level light and infrared [173] Y. Jin and A. F. Laine, ``Contrast enhancement by multiscale adaptive images,'' (in Chinese), Infr. Laser Eng., vol. 43, no. 5, pp. 16541659, histogram equalization,'' Proc. SPIE, vol. 4478, pp. 206213, Dec. 2001. 2014. [174] W. L. Jun and Z. Rong, ``Image defogging algorithm of single color image [197] X. Qian, L. Han, and B. Wang, ``A fast fusion algorithm of visible and based on wavelet transform and histogram equalization,'' Appl. Math. infrared images,'' (in Chinese), J. Comput.-Aided Des. Comput. Graph., Sci., vol. 7, pp. 39133921, 2013. vol. 23, no. 7, pp. 12111216, 2011. [175] L. Huang, W. Zhao, and J. Wang, ``Combination of contrast limited [198] J. Li, S. Z. Li, Q. Pan, and T. Yang, ``Illumination and motion-based video adaptive histogram equalization and discrete wavelet transform for image enhancement for night surveillance,'' in Proc. IEEE Int. Workshop Vis. enhancement,'' IET Image Process., vol. 9, no. 10, pp. 908915, 2015. Surveill. Perform. Eval. Tracking Surveill., Jun. 2005, pp. 169175. [176] Q. Xu, J. Cui, and B. Chen, ``Low-light image enhancement algorithm [199] R. Raskar, J. Yu, and A. Llie, ``Image fusion for context enhancement and based on the wavelet transform and Retinex theory,'' (in Chinese), video surrealism,'' in Proc. 3rd Int. Symp. Non-Photorealistic Animation J. Hunan Univ. Arts Sci., vol. 29, no. 2, pp. 4146, 2017. Rendering, 2004, pp. 8594. 87914 VOLUME 8, 2020 W. Wang et al.: Experiment-Based Review of Low-Light Image Enhancement Methods [200] Y. Rao, ``Image-based fusion for video enhancement of night-time [225] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang, ``A new low-light image surveillance,'' Opt. Eng., vol. 49, no. 12, pp. 120501120503, Dec. 2010. enhancement algorithm using camera response model,'' in Proc. IEEE [201] Y. Pu, L. Liu, and X. Liu, ``Enhancement technology of video under Int. Conf. Comput. Vis. 
Workshops, Oct. 2017, pp. 30153022. low illumination,'' (in Chinese), Infr. Laser Eng., vol. 43, no. 6, [226] Z. Rahman, M. Aamir, Y.-F. Pu, F. Ullah, and Q. Dai, ``A smart system pp. 20212026, 2014. for low-light image enhancement with color constancy and detail manip- [202] J. Zhu and Z. Wang, ``Low-illumination surveillance image enhancement ulation in complex light environments,'' Symmetry, vol. 10, no. 12, 2018, based on similar scenes,'' (in Chinese), Comput. Appl. Softw., vol. 32, Art. no. 10120718. no. 1, pp. 203205, 2015. [227] Z. Zhou et al., ``Single-image low-light enhancement via generating and fusing multiple sources,'' Neural Comput. Appl., Nov. 2018, doi: [203] Y. Rao, Z. Chen, and M. Sun, ``An effective night video enhancement 10.1007/s00521-018-3893-3. algorithm,'' in Proc. IEEE Conf. Vis. Commun. Image Process., Jun. 2011, [228] K. He, J. Sun, and X. Tang, ``Single image haze removal using dark pp. 14. channel prior,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., [204] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and Jun. 2009, pp. 19561963. K. Toyama, ``Digital photography with ash and no-ash image pairs,'' [229] X. Dong, G. Wang, and Y. Pang, ``Fast efcient algorithm for enhance- ACM Trans. Graph., vol. 23, no. 3, pp. 664672, Aug. 2004. ment of low lighting video,'' in Proc. IEEE Int. Conf. Multimedia Expo [205] E. Reinhard et al., High Dynamic Range Imaging: Acquisition, Display Washington, Jul. 2011, pp. 16. and Image-Based Lighting, 2nd ed. San Francisco, CA, USA: Morgan [230] G. Li, G. Li, and G. Han, ``Illumination compensation using Retinex Kaufmann, 2010. model based on bright channel prior,'' (in Chinese), Opt. Precis. Eng., [206] T. Stathaki, Image Fusion: Algorithms and Applications. New York, NY, vol. 26, no. 5, pp. 11911200, 2018. USA: Academic, 2008. [231] X. Fu, D. Zeng, Y. Huang, X. Ding, and X.-P. Zhang, ``A variational [207] H. Zhang, E. Zhu, and Y. 
Wu, ``High dynamic range image generating framework for single low light image enhancement using bright channel algorithm based on detail layer separation of a single exposure image,'' prior,'' in Proc. IEEE Global Conf. Signal Inf. Process., Dec. 2013, (in Chinese), Acta Automatica Sinica, vol. 45, no. 11, pp. 21592170, pp. 10851088. [232] X. Wang, H. Zhang, and Y. Wu, ``Low-illumination image enhance- [208] R. Fattal, D. Lischinski, and M. Werman, ``Gradient domain high ment based on physical model,'' J. Comput. Appl., vol. 35, no. 8, dynamic range compression,'' ACM Trans. Graph., vol. 21, no. 3, pp. 23012304, 2015. pp. 249256, Jul. 2002. [233] Z. Shi, M. M. Zhu, B. Guo, M. Zhao, and C. Zhang, ``Nighttime low [209] Z. Guo Li, J. H. Zheng, and S. Rahardja, ``Detail-enhanced exposure illumination image enhancement with single image using bright/dark fusion,'' IEEE Trans. Image Process., vol. 21, no. 11, pp. 46724676, channel prior,'' EURASIP J. Image Video Process., vol. 2018, no. 1, Nov. 2012. Dec. 2018, Art. no. 13. [210] B.-J. Yun, H.-D. Hong, and H.-H. Choi, ``A contrast enhancement method [234] X. Wei, L. Xueling, T. Zhigang, Y. Jin, and X. Ke, ``Low light image for HDR image using a modied image formation model,'' IEICE Trans. enhancement based on luminance map and haze removal model,'' in Proc. Inf. Syst., vol. E95-D, no. 4, pp. 11121119, 2012. 10th Int. Symp. Comput. Intell. Design, Dec. 2017, pp. 143146. [211] I. Merianos and N. Mitianoudis, ``A hybrid multiple exposure image [235] X. Zhang, P. Shen, and L. Luo, ``Enhancement and noise reduction of fusion approach for HDR image synthesis,'' in Proc. IEEE Int. Conf. very low light level images,'' in Proc. IEEE Int. Conf. Pattern Recognit., Imag. Syst. Techn., Oct. 2016, pp. 222226. Nov. 2012, pp. 20342037. [212] D. Patel, B. Sonane, and S. Raman, ``Multi-exposure image fusion using [236] X. Jiang, H. Yao, S. Zhang, X. Lu, and W. Zeng, ``Night video enhance- propagated image ltering,'' in Proc. Int. Conf. 
XIAOJIN WU received the Ph.D. degree in traffic information engineering and control from Beijing Jiaotong University, in 2011. He is currently with the College of Information and Control Engineering, Weifang University. He is also a Visiting Scholar with the University of North Texas, engaged in research on image enhancement technology. He has published more than ten articles in academic journals and conference proceedings. His main research interests include intelligent systems and image processing.

XIAOHUI YUAN (Senior Member, IEEE) received the B.S. degree in electrical engineering from the Hefei University of Technology, China, in 1996, and the Ph.D. degree in computer science from Tulane University, in 2004. He is currently an Associate Professor with the University of North Texas (UNT). His research findings are reported in over 150 peer-reviewed articles. His research interests include computer vision, data mining, machine learning, and artificial intelligence. He was a recipient of the Ralph E. Powe Junior Faculty Enhancement Award, in 2008, and the Air Force Summer Faculty Fellowship, in 2011, 2012, and 2013. He has served as a session chair at many conferences, as a panel reviewer for funding agencies, including NSF, NIH, and the Louisiana Board of Regents' Research Competitiveness Program, and on the editorial boards of several international journals.

WENCHENG WANG (Member, IEEE) received the Ph.D. degree in pattern recognition and intelligent systems from Shandong University. From 2015 to 2016, he was a Visiting Scholar with the University of North Texas, engaged in research on image dehazing and enhancement technology. He is currently a Professor with Weifang University. He is also a Principal Investigator with Weifang City's Key Laboratory and the Innovation Team of the Shandong Provincial Education Department on robot vision perception and control. He has published more than 60 articles in academic journals and conference proceedings and holds over 14 patents. His main research interests include computer vision, pattern recognition, and intelligent computing.

ZAIRUI GAO received the Ph.D. degree in control theory and control engineering from the Ocean University of China, in 2012. Since 2012, he has been a Lecturer with the College of Information and Control Engineering, Weifang University, China. He has published more than 20 articles in academic journals and conference proceedings and has participated in more than ten scientific research projects. His research interests include variable structure control, intelligent control, singular systems, and information processing.

VOLUME 8, 2020 87917