Poultry carcass visceral contour recognition method using image processing

Summary Automatic evisceration is an important technology in the poultry slaughter industry. In this study, a machine-vision-based method of locating the viscera of poultry carcasses is described. The image recognition system employs color image segmentation in the HSV (hue, saturation, value) and LAB (lightness, A, B) color spaces. An active contour segmentation algorithm, a threshold segmentation method, and morphological image operations are used to segment the viscera in images of poultry carcasses. The visceral contour is then extracted and its position detected for common supermarket poultry, namely Three-Yellow Chicken and Cherry Valley Duck. The identification rates of this visceral contour recognition system were 93.3 and 86.7%, respectively, suggesting that the proposed image recognition algorithm can achieve the accuracy required for poultry visceral contour detection. The proposed image-processing-based visceral contour recognition method can therefore be applied in poultry processing, improving the production efficiency of Three-Yellow Chicken and Cherry Valley Duck processing and providing more convenient processing services to the poultry industry.

DESCRIPTION OF PROBLEM Annually, more than one billion poultry (turkeys, ducks, and chickens) are slaughtered and processed as meat products for daily consumption in China [1,2]. Poultry slaughtering and processing consists of live-poultry hanging, electrical stunning, bleeding, defeathering, evisceration, washing, precooling, cutting, and other processes [3]. Of these tasks, evisceration is one of the most important procedures. At present, there are 2 methods for the evisceration of poultry: manual auxiliary-line evisceration and automatic evisceration technology [4-6]. 
Manual evisceration does not require uniform poultry bodies, but its throughput is limited compared with automatic technology. Manual work also has several disadvantages, such as a poor working environment, hard labor, and low production efficiency, which threaten the health of the workers. An automatic evisceration system could substantially improve production efficiency and the working environment, and strictly control the separation and uniformity of meat and bones. The automation of slaughter will therefore provide more convenient processing services and more healthful meat in the future, and the development and implementation of automation equipment is necessary for the large-scale, standardized production of poultry. In past decades, manual auxiliary-line operation was used in most poultry processing enterprises, and the traditional manual scooping method has gradually been replaced by the automatic recognition and grasping of internal organs [7, 8]. Meanwhile, system control is developing from a combination of an irregular curved rail groove and a planar cam toward a spatial cam mechanism. Researchers have studied various automation equipment systems [9-19]. For example, Ma proposed a method to remove viscera using equipment composed of a mobile control slider, a swing control slider, a pendulum lever, a flexible adjustment device, and a manipulator [20]; the internal organs were scooped or grasped by the manipulator and extracted from the poultry. Several large poultry processing companies use automation equipment to eviscerate internal organs [1,7], which clearly improves production efficiency and market competitiveness while significantly reducing processing costs. 
However, most studies do not consider the integrity of the edible internal organs during grasping, because the manipulator is a fixed single structure (a scoop or shovel), and organs such as the liver and intestine are easily broken. This paper uses a multi-fingered robot hand mounted on a DELTA robot to eviscerate internal organs. The multi-fingered hand is designed to imitate a human hand and is more flexible than previous evisceration manipulators. Image processing and automated operating systems are now important technical methods in modern food production and have been widely used in meat processing enterprises. For example, Bosoon et al. (2006) proposed a band-ratio image processing algorithm in a hyperspectral imaging system to detect surface fecal and ingesta contaminants on poultry carcasses [21]. In another study, they used texture features of poultry images, computed from co-occurrence matrices, to discriminate unwholesome poultry carcasses from wholesome ones [22]. Chao et al. proposed multispectral inspection using fuzzy-logic detection algorithms in a hyperspectral-multispectral line-scan imaging system to differentiate wholesome and diseased chickens [23]. In these papers, the target is identified effectively by image processing algorithms, but image processing technology has not yet been introduced into evisceration equipment. Previous evisceration systems measure the size of the poultry's internal organs manually to determine the position of the manipulator operation [18-20], so the poultry must be supplied on a large scale and the bodies must be roughly the same size. In this study, using image processing techniques, poultry internal organs of different sizes can be recognized effectively, so that the multi-fingered robot hand can be guided to eviscerate them on the on-line DELTA robot evisceration equipment. 
In addition, the grasping posture can be automatically adjusted according to the size of the poultry body.

MATERIALS AND METHODS Establishment of the Image Acquisition System A custom poultry carcass image acquisition chamber was built and mounted above a conveyor like those used to transport poultry in the on-line DELTA robot evisceration equipment (Figure 1). The camera and the light source were placed inside the chamber, and the camera took RGB (red, green, blue) images of poultry on the conveyor. A diagram of the poultry carcass image acquisition system is shown in Figure 2. The system consists of a GigE Vision industrial camera (standard Genie series CRGEN3M640x; resolution 640 × 480, pixel size 7.4 μm), an image acquisition card (OK-C30A, Beijing Joinhope Image Technology Ltd.) installed in a computer, the image acquisition chamber, LED light sources, a conveyor, and other items. The LEDs are symmetrically arranged on the inner sidewall of the lighting chamber. Black sandpaper was placed on the conveyor as background, and the opened poultry carcass was placed on it. The images for the experiment were acquired by placing the poultry under the chamber and manually orienting the internal organs, which comprised the heart, liver, and fatty area. Each image was saved in JPG format with a name encoding the variety and ID number of the poultry. The algorithms were implemented in the MATLAB R2012a environment. Figure 1. On-line DELTA robot evisceration equipment. Figure 2. Poultry carcass image acquisition system. The DELTA robot and accompanying conveyor line were obtained from Siasun Inc. of China. 
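For illustration only, the per-image naming convention mentioned above (variety plus ID number) might be encoded as follows. The exact scheme is not given in the paper, so the "<variety>_<NNN>.jpg" pattern and the function name are assumptions, and Python stands in for the authors' MATLAB environment.

```python
from pathlib import Path

def image_filename(variety: str, poultry_id: int, out_dir: str = ".") -> Path:
    """Build a save path encoding the poultry variety and ID number.

    Assumption: the paper only says each JPG name is "related to the
    variety and ID number"; the pattern used here is illustrative,
    not the authors' actual scheme.
    """
    stem = variety.replace(" ", "")            # drop spaces, keep hyphens
    return Path(out_dir) / f"{stem}_{poultry_id:03d}.jpg"

# e.g. image_filename("Cherry Valley Duck", 7) -> CherryValleyDuck_007.jpg
```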
The DELTA robot (Figure 1) is based on the classic DELTA mechanism with a new 4-degree-of-freedom parallel hand. All motors are installed on a fixed frame, which reduces the weight of the robot and the inertia of the manipulator, so the robot moves quickly with high repetition accuracy. In this study, we independently designed a new multi-finger robot hand that can be installed on the DELTA robot for grasping poultry carcass organs. As shown in Figure 3, each mechanical finger of the robot hand has 2 joints, and the degree of finger bend can be regulated. The position of the middle finger, which is connected to the chassis, cannot be changed, but the other 2 fingers are driven by micro motors and can rotate around the normal line of the hand. To minimize internal organ damage caused by robot hand manipulation, the machine vision system determines the contour and position of the internal organs. Figure 3. Multi-finger robot hand.

Collection and Processing of Materials Thirty Three-Yellow Chickens (weight: 1.9 to 2.3 kg) and 30 Cherry Valley Ducks (weight: 1.7 to 2.5 kg) were selected for this study. After both kinds of poultry were slaughtered, defeathered, and opened, their images were collected through the image acquisition system. The opened poultry consists of external meat and internal viscera, and the visceral area is divided into an upper part (the heart and liver area) and a lower part (the fatty area). Poultry carcass recognition enables the robot hand to determine the initial location, after which both internal organ areas are precisely recognized. The processing flow is shown in Figure 4. The image processing program performs the segmentation, from which the entire internal organ contour is obtained. Figure 4. Flow chart of image preprocessing.

Image Recognition of the Poultry Carcass The images of poultry carcass surfaces collected by the industrial camera may contain noise that reduces image quality, because the image is corrupted during production, transmission, and recording. Therefore, a median filtering algorithm is used to improve the quality of each image, and gray-level threshold segmentation is then used to recognize the poultry carcass. In this process, the gray value of each pixel is compared with a threshold, yielding a binary image. The basic equation of this method is:

\begin{equation}g\left( {x,y} \right) = \begin{cases} {Z_A}, & f\left( {x,y} \right) \ge T\\ {Z_B}, & \text{otherwise} \end{cases}\end{equation} (1)

where f(x, y) is the gray value of pixel (x, y) in the image, T is the threshold value, Z_A and Z_B label the target and background, respectively, and g(x, y) is the resulting binary image. Common threshold segmentation methods include maximum entropy thresholding [24], Otsu's method [25], the minimum error threshold [26], and the iterative threshold method [27]. The iterative threshold method uses successive approximation to compute an appropriate threshold. Its steps are as follows: 1) Compute the maximum gray value (Zmax) and minimum gray value (Zmin) of the poultry carcass image, and initialize the threshold as

\begin{equation}{T_0} = \frac{{{Z_{\max }} + {Z_{\min }}}}{2}\end{equation} (2)

2) Using Tk, the image is divided into the target and the background. 
Their average gray values (Z_A and Z_B) are computed using the following formulas:

\begin{equation}{Z_A} = \frac{{\sum\limits_{Z(i,j) \ge {T_K}} {Z(i,j)N(i,j)} }}{{\sum\limits_{Z(i,j) \ge {T_K}} {N(i,j)} }}\end{equation} (3)

\begin{equation}{Z_B} = \frac{{\sum\limits_{Z(i,j) < {T_K}} {Z(i,j)N(i,j)} }}{{\sum\limits_{Z(i,j) < {T_K}} {N(i,j)} }}\end{equation} (4)

where Z(i, j) is the gray value of point (i, j) in the image, and N(i, j) is the weight coefficient of point (i, j); generally, N(i, j) = 1.0. 3) The new threshold (Tk+1) is computed as

\begin{equation*}{T_{K + 1}} = \frac{{{Z_A} + {Z_B}}}{2}\end{equation*}

4) If Tk = Tk+1, stop; otherwise, set k ← k + 1 and go to step 2).

Segmentation of the Viscera Area Segmentation of the Heart and Liver Area From the Image In this study, the active contour model segmentation algorithm proposed by Chan et al. [28] is used to segment the upper part of the poultry viscera (the heart and liver area). The active contour model is also known as the snake model, where a snake is a curve expressed by a parametric equation. An energy functional is associated with the curve, and when the energy of the curve is minimized, the target is found. Driven by the combination of internal forces determined by the curve and external forces derived from the image, the curve moves in the direction that minimizes the energy and eventually converges to the boundary of the target. The snake is the parametric curve c(s) = (x(s), y(s)), 0 ≤ s ≤ 1, where s is its domain. The snake moves in the image domain to minimize the following energy functional:

\begin{eqnarray} {E_{snakes}} &=& \int_0^1 {\frac{1}{2}} \left[ {\alpha {{\left| {c'(s)} \right|}^2} + \beta {{\left| {c''(s)} \right|}^2}} \right] \nonumber\\ && +\, {E_{ext}}(c(s))\,ds \end{eqnarray} (5)

where α and β are factors controlling the curve's tension and rigidity, c′(s) and c″(s) are the first and second derivatives of the parametric curve, and Eext is the external energy, derived from the image. 
When the curve lies on a feature of interest, such as the target boundary, the external energy value is low. The Euler-Lagrange equation for minimizing the energy functional (5) is:

\begin{equation}\alpha c''(s) - \beta c''''(s) - \nabla {E_{ext}} = 0\end{equation} (6)

where c″″(s) is the fourth derivative of the parametric curve and ∇ is the gradient operator. The corresponding force-balance equation is:

\begin{equation} {F_{{\mathop{\rm int}} }} + {F_{ext}} = 0 \end{equation} (7)

where Fint = αc″(s) − βc″″(s) is the internal force and Fext = −∇Eext is the external force. Using the active contour model, the steps of the heart and liver area segmentation process are as follows: 1) Initialize the curve. To obtain the complete heart and liver area contour, the segmentation curve evolves outward; the initial curve is a circle centered on a pixel acquired on the heart, with a radius of 5 pixels. 2) Evolve the curve. The smoothness weight is set to α = 0.01, and the maximum number of iterations is 1,000. The active contour algorithm iteratively evolves the initial curve, which gradually extends outward until the contour of the heart and liver area no longer changes. 3) Segment the heart and liver area. Using the result of the final evolution, the heart and liver area is separated from the other areas.

Image Segmentation of the Visceral Fatty Area The original image is converted into the LAB (lightness, A, B) model, and the B component is extracted. Based on the distribution of gray-level values in the B-component image, the optimal threshold is chosen using the gray-level threshold method to obtain the image of the visceral fatty area. The entire viscera image of the poultry is then obtained by combining the heart-liver area image and the fatty area image with image addition and morphological operators (dilation and erosion). 
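To make the pipeline concrete, the iterative threshold selection (equations 2 through 4), the binarization of equation (1), and the final mask combination with dilation and erosion can be sketched as follows. The paper's implementation was in MATLAB and is not reproduced here; this is a hedged Python/NumPy rendering with illustrative function names, and the active contour evolution itself is omitted (a library routine such as scikit-image's morphological Chan-Vese variant could stand in for it).

```python
import numpy as np

def iterative_threshold(gray: np.ndarray, eps: float = 0.5) -> float:
    """Iterative threshold selection (equations 2-4): start from the
    mid-range value, then repeatedly average the mean gray levels of
    the target and background classes until the threshold stabilizes."""
    t = (gray.max() + gray.min()) / 2.0              # T0, equation (2)
    while True:
        target = gray[gray >= t]                     # Z_A class, eq. (3)
        background = gray[gray < t]                  # Z_B class, eq. (4)
        za = target.mean() if target.size else t
        zb = background.mean() if background.size else t
        t_new = (za + zb) / 2.0                      # T_{k+1} update
        if abs(t_new - t) < eps:                     # Tk == Tk+1 (within eps)
            return t_new
        t = t_new

def binarize(gray: np.ndarray, t: float) -> np.ndarray:
    """Equation (1): pixels >= T become target (1), others background (0)."""
    return (gray >= t).astype(np.uint8)

def dilate(mask: np.ndarray, it: int = 1) -> np.ndarray:
    """3x3 binary dilation via shifted maxima (no external libraries)."""
    for _ in range(it):
        p = np.pad(mask, 1)
        mask = np.max([p[i:i + mask.shape[0], j:j + mask.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    return mask

def erode(mask: np.ndarray, it: int = 1) -> np.ndarray:
    """3x3 binary erosion via shifted minima."""
    for _ in range(it):
        p = np.pad(mask, 1, constant_values=1)
        mask = np.min([p[i:i + mask.shape[0], j:j + mask.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    return mask

def combine_viscera(heart_liver_mask: np.ndarray,
                    fatty_mask: np.ndarray) -> np.ndarray:
    """Union of the two area masks, closed (dilate then erode) to remove
    burrs and small gaps, giving the entire viscera mask."""
    union = np.maximum(heart_liver_mask, fatty_mask)
    return erode(dilate(union, 1), 1)
```

The closing step (dilation followed by erosion) mirrors the paper's use of morphological operators to clean burrs and noise after the two area images are added together.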
RESULTS AND DISCUSSION The main contribution of this study is a region-oriented image segmentation algorithm that detects the viscera position of the poultry and then positions the robot hand to grasp the viscera. Figure 5(a) shows an original image of a poultry carcass, randomly extracted from the 148 poultry carcass images obtained by the image recognition system (Figure 2). The image is composed of a colored carcass area and a dark background area, in which the pixel values are homogeneous and highly consistent. The carcass area is located in the middle of the image, and the background area surrounds it. Compared with the RGB image, the LAB image shows a more pronounced difference between target and background brightness: the background area is darker, and the target area is lighter. Therefore, the original RGB image is converted to LAB color. The L channel of the LAB image is extracted and processed by median filtering and iterative threshold segmentation; the result is shown in Figure 5(b). Because the colors of bone and residual blood are similar to those of the heart and liver area, some regional residues remain in the image, and they had to be removed with a small-region elimination function. The final poultry carcass contour is then obtained, as shown in Figure 5(c). Figure 5. (a) Original image of a poultry carcass; (b) segmented image showing the poultry carcass, heart and liver area, and bone and residual blood inside; and (c) poultry carcass contour used for the initial location. The poultry viscera consist of 2 areas with clearly different colors: the red upper part is the heart and liver area, while the yellow lower part is the visceral fatty area and mainly consists of the digestive tract. Because their colors differ, they are segmented independently. The original image is converted to an HSV (hue, saturation, value) image that exhibits the obvious color difference in the visceral area, as shown in Figure 6(a). The heart and liver area is the largest connected region of the visceral organs. It is segmented using the active contour model, and the result is shown in Figure 6(b). The active contour algorithm is an iterative convergence process; although its complexity is higher, its high segmentation accuracy provides reliable results for subsequent processing tasks. The final heart and liver area, segmented from the carcass and background areas, is shown in Figure 6(c). Figure 6. (a) HSV image converted from the original RGB image; (b) segmentation result using the active contour model, showing the contour of the heart and liver area; and (c) heart and liver area separated from the background and other areas. Figure 7(a) shows the B-component image, in which the visceral fatty area is bright white and differs significantly from the other areas. Here T = 0.561 is chosen as the optimal threshold, and the visceral fatty area is segmented using the gray-level threshold segmentation method; the result is shown in Figure 7(b). Figure 7(c) shows the segmented image with burrs and noise removed using dilation and erosion. Figure 7. (a) B-component image of the LAB model converted from the original RGB image; (b) segmented image showing the visceral fatty area obtained using the threshold segmentation method; and (c) segmented image showing the visceral fatty area with burrs and noise removed by dilation and erosion. The images in Figure 6(c) and Figure 7(c) are added together, and the resulting image is shown in Figure 8(a). Figure 8. (a) Result of adding the images in Figure 6(c) and Figure 7(c) and (b) visceral center location. To avoid damaging the visceral organs when the poultry viscera are handled, the positioning of the robot hand is very important and must be chosen carefully; incorrect positioning can cause the robot hand to grasp the internal organs unevenly. As shown in Figure 8(b), the hollow in the visceral image of Figure 8(a) is eliminated, and the central position is determined. Table 1 presents the segmentation performance of the proposed method on images of chickens and ducks. 
The results show optimal separation (100%) of the background from the other areas, which enables the centroid of each part of the chicken and duck to be estimated accurately. Most errors in the image segmentation procedure were due to small isolated clusters of pixels, mainly located at the boundaries of adjacent regions; these errors can be detected and corrected when the features of each segmented area are calculated.

Table 1. Percentage of image recognition for each area of chickens and ducks.

Area                    Chicken (%)   Duck (%)
Heart and liver area    99            85
Visceral fatty area     95            87
Poultry carcass area    99            96
Background              100           100

The results in Table 2 show that the recognition rate for Three-Yellow Chicken is similar to that for Cherry Valley Duck, and both exhibit high, effective recognition precision for the visceral organs. In the whole recognition process, image processing and positioning take 6.14 seconds. The recognition rates for the complete viscera are 93.3 and 86.7%, respectively, with the rate for Three-Yellow Chicken slightly higher than that for Cherry Valley Duck. We speculate that the recognition errors for Cherry Valley Duck arise because its internal organ colors are closer to those of the meat. Moreover, there was some residual blood in the lumen of Cherry Valley Duck, which influenced the image recognition of its internal organs. 
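As a quick consistency check, the whole-viscera rates quoted above correspond to 28 of 30 chickens and 26 of 30 ducks; these per-bird counts are inferred from the percentages, not stated in the paper.

```python
def recognition_rate(correct: int, total: int) -> float:
    """Whole-viscera recognition rate as a percentage (one decimal)."""
    return round(100.0 * correct / total, 1)

# Inferred counts: 28/30 chickens and 26/30 ducks (assumption from the
# reported 93.3% and 86.7%, not given explicitly in the paper).
print(recognition_rate(28, 30))  # 93.3
print(recognition_rate(26, 30))  # 86.7
```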
The visceral contour recognition of Three-Yellow Chicken also produced some errors, one reason being that the color of the visceral fatty area is very close to that of the carcass. Cherry Valley Duck has a larger body than Three-Yellow Chicken and is more affected by adjacent regions, which may explain why its recognition rate is lower. All of these factors can produce errors in image recognition, leading to incorrect internal organ recognition; we will therefore attempt to develop a method for improving the recognition rate in further studies.

Table 2. Poultry entire viscera recognition.

Name                   Number   Recognition rate of entire viscera (%)
Three-Yellow Chicken   30       93.3
Cherry Valley Duck     30       86.7

CONCLUSIONS AND APPLICATIONS In this study, using the color characteristics of poultry carcass images, we proposed a method to determine the position of poultry visceral organs. The target contour is efficiently obtained using the active contour model and iterative threshold segmentation, and the internal organs are then accurately located for the multi-finger robot hand. The experimental results show that the visceral organs of Three-Yellow Chickens and Cherry Valley Ducks were recognized with accuracies of 93.3 and 86.7%, respectively. These results will assist the application of this technology in actual poultry processing.

REFERENCES AND NOTES 1. Zhang K. B. 2011. 
In “Twelve Five-Year Plan” period, development targets of Chinese poultry slaughter and processing technology and equipment. Meat Industry 3: 8-11.
2. Harmse J. L., Engelbrecht J. C., Bekker J. L. 2016. The impact of physical and ergonomic hazards on poultry abattoir processing workers: A review. Int. J. Environ. Res. Public Health 13: 197-198.
3. Janssen P. C. H., Gerrits J. G. M., van den Nieuwelaar A. J. 2011. Method and apparatus for the separate harvesting of back skin and back meat from a carcass part of slaughtered poultry. US Patent 07967668.
4. Bendt T. 2009. A method and an apparatus for evisceration of poultry. Patent WO 2009/043348A1.
5. Ma P. W., Wang L., Ye J., Wang Z. 2009. The domestic application prospects of automatic evisceration technology. Academic Periodical of Farm Products Processing and Equipment of Slaughtered Poultry 10: 93-95.
6. Zhang K. B. 2012. Design on live area of poultry slaughtering and processing. Meat Industry 3: 2-6.
7. Ma P. W., Wang L. H., Ye J. P., Wang Z. K. 2009. The domestic application prospects of automatic evisceration technology and equipment of slaughtered poultry. Academic Periodical of Farm Products Processing 10: 93-96.
8. Jing J. Q., Wang L. H., Ye J. P., Wang M., Guo N. 2016. Research and application of apparatus for opening the body cavity of poultry. Packaging and Food Machinery 34: 64-66.
9. Cornelis. 1992. Apparatus for eviscerating slaughtered poultry. Patent EP 0497014A1.
10. Bendt T. 2009. A method and an apparatus for evisceration of poultry. Patent WO 2009/043348A1.
11. Edward J. 1991. Device for eviscerating slaughtered poultry. Patent EP 0432317A1.
12. Lindholst S. 2002. Method, apparatus and evisceration spoon for eviscerating intestine packs of slaughtered poultry. Patent WO 98/044806.
13. Stork PMT B.V. 2006. Eviscerating member, device and method for processing a cluster of viscera of a slaughtered animal. Patent EP 1248525B1.
14. Tieleman Food Equipment B.V. 2008. Device for removing viscera from slaughtered poultry. US Patent 7425173 B1.
15. Machinefabriek Meyn B.V. 1992. Method and apparatus for eviscerating poultry. Patent EP 0574617A1.
16. Stork PMT B.V. 1997. Method and device for processing a cluster of organs from a slaughtered animal. Patent EP 0587253B1.
17. Tieleman R. J. 1983. Poultry eviscerating tool. US Patent 4435878.
18. Wang L. H., Yan C. L., Ye J. P., Ma P. W., Wang Z. K., Pan M. 2010. Design and experiment of QNZ15 automatic poultry eviscerator. Transactions of the Chinese Society of Agricultural Machinery 41: 220-224.
19. Wang M. 2011. Study on structure and trajectory parameters of the grippable automatic evisceration manipulator for poultry. Master's thesis, China Agricultural Mechanization Science Institute.
20. Ma P. W. 2010. Study on structure and trajectory parameters of device for removing viscera from slaughtered poultry. Master's thesis, China Agricultural Mechanization Science Institute.
21. Bosoon P., Kurt C. L., William R. W., Douglas P. S. 2006. Performance of hyperspectral imaging system for poultry surface fecal contaminant detection. Journal of Food Engineering.
22. Bosoon P., Kurt C. L., William R. W., Yud R. C., Kevin C. 2002. Discriminant analysis of dual-wavelength spectral images for classifying poultry carcasses. Computers and Electronics in Agriculture.
23. Chao K., Yang C. C., Chen Y. R., Kim M. S., Chan D. E. 2007. Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety. Poultry Science.
24. Yin P. Y. 2002. Maximum entropy-based optimal threshold selection using deterministic reinforcement learning with controlled randomization. Signal Processing 82: 993-1006.
25. Otsu N. 1979. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 9: 62-66.
26. Kittler J., Illingworth J. 1986. Minimum error thresholding. Pattern Recognition 19: 41-47.
27. Kapur J. N., Sahoo P. K., Wong A. K. C. 1985. A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing 29: 273-285.
28. Chan T. F., Vese L. A. 2001. Active contours without edges. IEEE Transactions on Image Processing 10: 266-277.

Acknowledgments This work was supported by the National “Twelfth Five-Year” Plan for Science & Technology Support of China (2015BAD19B06).

© 2018 Poultry Science Association Inc. Journal of Applied Poultry Research, Oxford University Press

Publisher: Applied Poultry Science, Inc.
ISSN: 1056-6171; eISSN: 1537-0437
DOI: 10.3382/japr/pfx073
Summary Automatic evisceration is a very important application technology in the poultry slaughter industry. In this study, a machine-vision-based method of locating the viscera of poultry carcasses is described. Using machine vision, the image recognition system employs a color image segmentation of the HSV (hue, saturation, value) and LAB (Lightness-A-B) color spaces. An active contour segmentation algorithm, threshold segmentation method, and image operations are used to segment the images of viscera of poultry carcasses. Subsequently, the visceral contour is extracted and its position is detected for common poultry from the supermarket, such as Three-Yellow Chicken and Cherry Valley Duck. The identification rates of this visceral contour recognition system were 93.3 and 86.7%, respectively, for each type of poultry, suggesting that the proposed image recognition algorithm could achieve the accuracy required for poultry visceral contour detection. Therefore, the proposed image-processing-based visceral contour recognition method can be applied in poultry processing, improving the production efficiency of Three-Yellow Chicken and Cherry Valley Duck processing and providing more convenient processing services in the poultry industry. DESCRIPTION OF PROBLEM Annually, more than one billion poultry (turkey, duck, and chicken) are slaughtered and processed as meat products for daily consumption by the public in China [1,2]. Poultry slaughtering and processing consists of live-poultry hanging, electric shock, bloodletting, unhairing, viscera cleaning, flushing, precooling, segmentation, and other processes [3]. Of these poultry slaughtering and processing tasks, evisceration work is one of the most important procedures. At present, there are 2 methods for the evisceration of poultry: manual auxiliary line evisceration and automatic evisceration technology [4-6]. 
However, manual evisceration does not require sufficiently uniform poultry bodies, and processing amounts are limited in comparison to automatic technology. Moreover, manual work also has several disadvantages, such as a poor working environment, difficult labor, and low production efficiency, which threaten the health of the workers. An automatic evisceration system could substantially improve the production efficiency of the meat production and working environment, and strictly control the separation and uniformity of meat and bones. Therefore, the automation of slaughter will provide more convenient processing services and healthful meat in the future. The development and implementation of automation equipment is necessary for the large-scale and standardized production of poultry. In the past decades, manual auxiliary line operation was mainly used in most poultry processing enterprises, and the traditional manual scooping method was gradually replaced by the automatic recognition and grasping of internal organs [7, 8]. Moreover, the system control is developing to the spatial cam mechanism from a combination control consisting of an irregular round curved rail groove and planar cam. Researchers have studied various automation equipment systems [9-19]. For example, Ma proposed a method to remove viscera using equipment composed of a mobile control slider, swing control slider, pendulum lever, flexible adjustment device, and manipulator [20]. The internal organs were scooped or grasped by the manipulator and extracted from the poultry. Several large poultry processing companies use automation equipment to eviscerate internal organs [1,7], which clearly improves the production efficiency and market competitiveness, while significantly reducing their poultry processing production costs. 
However, most studies do not consider the integrity of the edible internal organs during grasping, because the manipulator consists of a single fixed structure (scoop or shovel), and organs such as the liver and intestine are easily broken. In this paper, a multi-fingered robot hand mounted on a DELTA robot is used to remove the internal organs. The multi-fingered robot hand is designed to mimic a human hand and is more flexible than previous evisceration manipulators. Image processing and automated operating systems are now important technical methods in modern food production and have been widely used in a variety of meat processing enterprises. For example, Bosoon et al. (2006) proposed a band-ratio image processing algorithm in a hyperspectral imaging system to detect surface fecal and ingesta contaminants on poultry carcasses [21]. In another study, they used texture features of poultry images, computed from co-occurrence matrices, to discriminate unwholesome poultry carcasses from wholesome ones [22]. Chao proposed multispectral inspection using fuzzy logic detection algorithms in a hyperspectral-multispectral line-scan imaging system to differentiate wholesome and diseased chickens [23]. In these papers, the target is identified effectively by image processing algorithms; however, image processing technology has not yet been introduced into evisceration equipment. Previous evisceration systems measure the size of the poultry's internal organs manually to determine the position of the manipulator operation [18-20], so poultry must be supplied on a large scale and the bodies must be roughly the same size. In this study, image processing techniques allow poultry internal organs of different sizes to be recognized effectively, so that the multi-fingered robot hand can be guided to remove them on the on-line DELTA robot evisceration equipment.
In addition, the grasping posture can be automatically adjusted according to the size of the poultry body.

MATERIALS AND METHODS

Establishment of the Image Acquisition System

A custom poultry carcass image acquisition chamber was built and mounted above a conveyor like those used to transport poultry in the on-line DELTA robot evisceration equipment (Figure 1). The camera and the light source were placed inside the image acquisition chamber, and the camera took RGB (red, green, blue) images of poultry on the conveyor. A diagram of the poultry carcass image acquisition system is shown in Figure 2. The acquisition system consists of an industrial GigE Vision camera, a standard Genie series CRGEN3M640x (resolution 640 × 480, pixel size 7.4 μm), an image acquisition card (OK-C30A, Beijing Joinhope Image Technology Ltd.) installed in the computer, a computer, the image acquisition chamber, LED light sources, a conveyor, and other items. The LEDs are symmetrically arranged on the inner sidewall of the lighting chamber. Black sandpaper was placed on the conveyor as the background, and the opened poultry carcass was placed on it. The images for the experiment were acquired by placing the poultry under the chamber and manually orienting the internal organs, which comprised the heart, liver, and fatty area. Each image was saved in JPEG format with a name encoding the variety and ID number of the poultry. The algorithms were implemented in the MATLAB R2012a environment.

Figure 1. On-line DELTA robot evisceration equipment.

Figure 2. Poultry carcass image acquisition system.

The DELTA robot and accompanying conveyor line were obtained from Siasun Inc. of China.
The DELTA robot (Figure 1) is based on the classic DELTA mechanism with a new 4-degrees-of-freedom parallel robot hand. All motors are installed on a fixed frame, which reduces the weight of the robot and the inertia of the manipulator, so the robot moves quickly with high repetition accuracy. In this study, we independently designed a new multi-finger robot hand that can be installed on the DELTA robot for grasping poultry carcass organs. As shown in Figure 3, each mechanical finger of the robot hand has 2 joints, and the degree of finger bend can be regulated. The position of the middle finger, which is connected to the chassis, cannot be changed, but the other 2 fingers can be driven by micro motors and rotated around the normal line of the hand. To minimize the internal organ damage caused by robot hand manipulation, the machine vision system determines the contour and position of the internal organs.

Figure 3. Multi-finger robot hand.

Collection and Processing of Materials

Thirty Three-Yellow Chickens (weight: 1.9 to 2.3 kg) and 30 Cherry Valley Ducks (weight: 1.7 to 2.5 kg) were selected for this study. After both kinds of poultry were slaughtered, defeathered, and opened, their images were collected with the image acquisition system. The opened poultry consists of the external meat and the internal viscera, and the visceral area is divided into an upper part, the heart and liver area, and a lower part, the fatty area. Poultry carcass recognition enables the robot hand to determine the initial location, and both internal organ areas are then precisely recognized. The processing flow is shown in Figure 4. The image processing program performs the image segmentation, and the entire internal organ contour is obtained from the program.

Figure 4. Flow chart of image preprocessing.
Image Recognition of the Poultry Carcass

Images collected by the industrial camera may contain noise that reduces image quality, because the image is corrupted during production, transmission, and recording. Therefore, a median filtering algorithm is used to improve the quality of each image, and gray-level threshold segmentation is then applied to recognize the poultry carcass. In this process, the gray value of each pixel is compared with a threshold value, yielding a binary image. The basic equation of this method is:

\begin{equation}
g(x,y) = \begin{cases} Z_A, & f(x,y) \ge T \\ Z_B, & \text{otherwise} \end{cases}
\end{equation} (1)

where f(x, y) is the gray value of pixel (x, y), T is the threshold value, Z_A and Z_B denote the target and background, respectively, and g(x, y) is the resulting binary image. Common threshold segmentation methods include maximum entropy thresholding [24], Otsu's method [25], the minimum error threshold [26], and the iterative threshold method [27]. The iterative threshold method computes an appropriate threshold by successive approximation. Its steps are as follows:

1) Calculate the maximum gray value Z_max and minimum gray value Z_min of the poultry carcass image, and take the initial threshold

\begin{equation}
T_0 = \frac{Z_{\max} + Z_{\min}}{2}
\end{equation} (2)

2) Using T_k, divide the image into the target and the background.
Their average gray values Z_A and Z_B are computed as

\begin{equation}
Z_A = \frac{\sum\limits_{Z(i,j) > T_k} Z(i,j)\,N(i,j)}{\sum\limits_{Z(i,j) > T_k} N(i,j)}
\end{equation} (3)

\begin{equation}
Z_B = \frac{\sum\limits_{Z(i,j) \le T_k} Z(i,j)\,N(i,j)}{\sum\limits_{Z(i,j) \le T_k} N(i,j)}
\end{equation} (4)

where Z(i, j) is the gray value of point (i, j) in the image and N(i, j) is the weight coefficient of point (i, j); generally, N(i, j) = 1.0.

3) Compute the new threshold value

\begin{equation*}
T_{k+1} = \frac{Z_A + Z_B}{2}
\end{equation*}

4) If T_k = T_{k+1}, stop; otherwise, set k ← k + 1 and go to step 2).

Segmentation of the Viscera Area

Segmentation of the Heart and Liver Area From the Image

In this study, the active contour model segmentation algorithm proposed by Chan et al. [28] is used to segment the upper part of the poultry viscera (the heart and liver area). The active contour model is also known as the snake model, where a snake is a curve expressed by a parametric equation. An energy functional is associated with the curve, and when the energy of the curve is minimized, the target is found. Driven by the combination of internal forces determined by the curve and external forces derived from the image, the curve moves in the direction that minimizes the energy and eventually converges to the boundary of the target. The snake is the parametric curve c(s) = (x(s), y(s)), 0 ≤ s ≤ 1, where s is its parameter. The snake moves in the image domain to minimize the energy functional

\begin{equation}
E_{snake} = \int_0^1 \frac{1}{2}\left[\alpha\left|c'(s)\right|^2 + \beta\left|c''(s)\right|^2\right] + E_{ext}(c(s))\,ds
\end{equation} (5)

where α and β are factors controlling the curve's tension and rigidity, c'(s) and c''(s) are the first and second derivatives of the parametric curve, and E_ext is the external energy, derived from the image.
When the curve lies on a feature of interest, such as the target boundary, the external energy is low. The Euler-Lagrange equation for minimizing the energy functional (5) is

\begin{equation}
\alpha c''(s) - \beta c''''(s) - \nabla E_{ext} = 0
\end{equation} (6)

where c''''(s) is the fourth derivative of the parametric curve and ∇ is the gradient operator. The corresponding force-balance equation is

\begin{equation}
F_{int} + F_{ext} = 0
\end{equation} (7)

where F_int = αc''(s) − βc''''(s) is the internal force and F_ext = −∇E_ext is the external force. Using the active contour model, the heart and liver area is segmented as follows:

1) Initialize the curve. To obtain the complete heart and liver contour, the segmentation curve evolves outward. The initial curve is a circle of radius 5 pixels centered on a pixel acquired on the heart.

2) Evolve the curve. The smoothing weight is set to α = 0.01, and the maximum number of iterations is 1,000. The active contour algorithm iteratively evolves the initial curve, which gradually extends outward until the contour of the heart and liver area no longer changes.

3) Segment the heart and liver area. Using the final evolved contour, the heart and liver area is separated from the other areas.

Image Segmentation of the Visceral Fatty Area

The original image is converted into the LAB (lightness, A, B) model, and the B component is extracted. Based on the distribution of gray-level values in the B component image, the optimal threshold is chosen with the gray-level threshold method to obtain an image of the visceral fatty area. The entire viscera image of the poultry is then obtained by combining the heart-liver area image and the fatty area image with image addition and morphological operators (dilation and erosion).
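The mask-combination step described above — dilation and erosion to clean the masks, then a union of the two regions — can be sketched in pure Python. The paper's implementation was in MATLAB; this illustrative version works on small boolean grids, the 3 × 3 structuring element is an assumption, and the centroid step anticipates the grasp-point location discussed in the results:

```python
def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    h, w = len(mask), len(mask[0])
    return [[any(mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def erode(mask):
    """Binary erosion with a 3x3 structuring element; border pixels
    are set to False because their full neighborhood is undefined."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = all(mask[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def union_centroid(a, b):
    """Union of two binary masks plus the centroid of the union,
    which can serve as a grasp point for the robot hand."""
    h, w = len(a), len(a[0])
    m = [[a[y][x] or b[y][x] for x in range(w)] for y in range(h)]
    pts = [(y, x) for y in range(h) for x in range(w) if m[y][x]]
    cy = sum(y for y, _ in pts) / len(pts)
    cx = sum(x for _, x in pts) / len(pts)
    return m, (cy, cx)

# Dilation followed by erosion (a morphological closing) fills a
# one-pixel hole in a 3x3 blob.
holed = [[1 <= y <= 3 and 1 <= x <= 3 and (y, x) != (2, 2)
          for x in range(5)] for y in range(5)]
closed = erode(dilate(holed))          # hole at (2, 2) is filled

# Union of a toy "heart-liver" mask and "fatty area" mask, plus centroid.
heart = [[1 <= y <= 2 and 1 <= x <= 3 for x in range(6)] for y in range(6)]
fatty = [[3 <= y <= 4 and 1 <= x <= 3 for x in range(6)] for y in range(6)]
viscera, (cy, cx) = union_centroid(heart, fatty)
```

A dilation followed by an erosion is a morphological closing, which fills small holes and smooths burrs; the reverse order (an opening) instead removes small isolated clusters.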
RESULTS AND DISCUSSION

The main contribution of this study is a region-oriented image segmentation algorithm that detects the position of the poultry viscera and then positions the robot hand to grasp them. Figure 5(a) shows an original poultry carcass image, randomly extracted from the 148 carcass images obtained with the image recognition system (Figure 2). The image is composed of a colored carcass area and a dark background area, in which the pixel values are homogeneous and highly consistent. The carcass area is located in the middle of the image, and the background is distributed around it. Compared with the RGB image, the LAB image shows a more pronounced brightness difference between target and background: the background is darker and the target lighter. Therefore, the original RGB image is converted to LAB color. The L channel is extracted and processed by median filtering and iterative threshold segmentation; the result is shown in Figure 5(b). Because the colors of bone and residual blood are similar to those of the heart and liver area, some small residual regions remain in the segmented image between the viscera and the background, and they had to be removed with a small-region elimination function. The final poultry carcass contour is then obtained, as shown in Figure 5(c).

Figure 5. (a) Original image of a poultry carcass; (b) segmented image showing the poultry carcass, heart and liver area, and bone and residual blood inside; and (c) poultry carcass contour used for the initial location.
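The preprocessing chain applied to the L channel — median filtering followed by the iterative threshold selection of Eqs. (2) to (4) — can be sketched in pure Python. The paper's implementation was in MATLAB; the 5 × 5 toy patch below is purely illustrative:

```python
def median_filter(img):
    """3x3 median filter; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of 9 values
    return out

def iterative_threshold(pixels):
    """Iterative threshold selection (Eqs. 2-4) with N(i,j) = 1."""
    t = (max(pixels) + min(pixels)) / 2.0  # Eq. (2): initial threshold
    while True:
        target = [z for z in pixels if z > t]
        background = [z for z in pixels if z <= t]
        za = sum(target) / len(target) if target else t           # Eq. (3)
        zb = sum(background) / len(background) if background else t  # Eq. (4)
        t_new = (za + zb) / 2.0
        if abs(t_new - t) < 1e-3:          # T_k == T_{k+1}: converged
            return t_new
        t = t_new

# Toy L-channel patch: dark background (20), bright carcass (200),
# and one salt-noise pixel (255) that the median filter removes.
img = [[20,  20,  20,  20, 20],
       [20, 200, 200, 200, 20],
       [20, 200, 255, 200, 20],
       [20, 200, 200, 200, 20],
       [20,  20,  20,  20, 20]]
filtered = median_filter(img)              # noise pixel becomes 200
flat = [v for row in filtered for v in row]
t = iterative_threshold(flat)              # settles between the two modes
binary = [[v > t for v in row] for row in filtered]
```

On this toy patch the threshold converges to the midpoint between the background and carcass gray levels, and the binarization of Eq. (1) then separates the two regions.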
The poultry viscera consist of 2 areas with clearly different colors: the red upper part is the heart and liver area, while the yellow lower part is the visceral fatty area and mainly consists of the digestive tract. Because their colors differ, the two areas are segmented independently in the image processing system. The original image is converted to an HSV (hue, saturation, value) image, which exhibits an obvious color difference in the visceral area, as shown in Figure 6(a). The heart and liver area is the largest connected region of the visceral organs. It is segmented using the active contour model, and the result is shown in Figure 6(b). The active contour algorithm is an iterative convergence process; although its computational complexity is higher, its high segmentation accuracy provides reliable results for subsequent processing. The final heart and liver area, separated from the carcass and background areas, is shown in Figure 6(c).

Figure 6. (a) HSV image converted from the original RGB image; (b) segmented result using the active contour model, showing the contour of the heart and liver area; and (c) heart and liver area separated from the background and other areas.
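The active-contour step can be illustrated with a toy version of the Chan-Vese region update in Python. This sketch keeps only the data-fidelity term — the full model [28] also penalizes contour length, and the paper's implementation was in MATLAB — so it shows the idea rather than the actual algorithm:

```python
def chan_vese_region_update(img, mask, iters=50):
    """Simplified Chan-Vese iteration: repeatedly assign each pixel to
    the region (inside/outside the contour) whose mean intensity it is
    closest to. Omits the curvature/length penalty of the full model."""
    h, w = len(img), len(img[0])
    for _ in range(iters):
        inside = [img[y][x] for y in range(h) for x in range(w) if mask[y][x]]
        outside = [img[y][x] for y in range(h) for x in range(w) if not mask[y][x]]
        if not inside or not outside:
            break
        c1 = sum(inside) / len(inside)      # mean gray level inside
        c2 = sum(outside) / len(outside)    # mean gray level outside
        new = [[abs(img[y][x] - c1) < abs(img[y][x] - c2)
                for x in range(w)] for y in range(h)]
        if new == mask:                     # partition stopped changing
            break
        mask = new
    return mask

# Bright 3x3 "heart-liver" blob (200) on a dark background (10),
# seeded from a single pixel on the heart, as in segmentation step 1.
img = [[200 if 1 <= y <= 3 and 1 <= x <= 3 else 10
        for x in range(5)] for y in range(5)]
seed = [[(y, x) == (2, 2) for x in range(5)] for y in range(5)]
seg = chan_vese_region_update(img, seed)    # grows outward to the blob
```

Like the evolution described above, the region grows outward from the seed until the partition no longer changes; the length term of the full Chan-Vese model additionally keeps the converged contour smooth.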
Figure 7. (a) B component image of the LAB model converted from the original RGB image; (b) segmented image showing the visceral fatty area obtained using the threshold segmentation method; and (c) segmented image showing the visceral fatty area with burrs and noise removed by dilation and erosion.

Figure 7(a) shows the B component image, in which the visceral fatty area is bright white and clearly distinct from the other areas. Here T = 0.561 is chosen as the optimal threshold, and the visceral fatty area is segmented using gray-level threshold segmentation; the result is shown in Figure 7(b). Figure 7(c) shows the segmented image with burrs and noise removed by dilation and erosion. The images in Figure 6(c) and Figure 7(c) are added together, and the resulting image is shown in Figure 8(a).

Figure 8. (a) Result of adding the images in Figure 6(c) and Figure 7(c) together and (b) visceral center location.

To avoid damaging the visceral organs during handling, the positioning of the robot hand is very important and should be carefully chosen; if the positioning is incorrect, the robot hand may grasp the internal organs unevenly. As shown in Figure 8(b), the hollow in the visceral image of Figure 8(a) is eliminated, and the central position is determined. Table 1 presents the segmentation performance of the proposed method on images of chickens and ducks.
The results show optimal separation (100%) of the background from the other areas, which enables the centroid of each part of the chicken and duck to be estimated accurately. Most errors in the image segmentation procedure were due to isolated small clusters of pixels, mainly located at the boundaries of adjacent regions; these errors can be detected and corrected when the features of each segmented area are calculated.

Table 1. Percentage of image recognition for each area of chickens and ducks.

Area                  Chicken (%)  Duck (%)
Heart and liver area  99           85
Visceral fatty area   95           87
Poultry carcass area  99           96
Background            100          100

The results in Table 2 show that the recognition rate for Three-Yellow Chicken is similar to that for Cherry Valley Duck, and the recognition precision for the visceral organs is high and effective. In the whole recognition process, the image processing and positioning time is 6.14 seconds. The recognition rates for the complete viscera are 93.3 and 86.7%, respectively; the rate for Three-Yellow Chicken is slightly higher than that for Cherry Valley Duck. We speculate that the recognition errors for Cherry Valley Duck arise because its internal organ colors are closer to those of the meat; moreover, residual blood in the lumen of Cherry Valley Duck influenced the image recognition of its internal organs.
The visceral contour recognition of Three-Yellow Chicken also produced some errors, partly because the color of the visceral fatty area is very close to that of the carcass. Cherry Valley Duck has a bigger body than Three-Yellow Chicken and is more affected by adjacent regions, which may explain its lower recognition rate. All of these factors can produce errors in image recognition, leading to incorrect internal organ recognition. We will therefore attempt to develop improved methods for raising the recognition rate in further studies.

Table 2. Poultry entire viscera recognition.

Name                  Number  Recognition rate of entire viscera (%)
Three-Yellow Chicken  30      93.3
Cherry Valley Duck    30      86.7

CONCLUSIONS AND APPLICATIONS

In this study, using the color characteristics of poultry carcass images, we proposed a method to determine the position of poultry visceral organs. The target contour is efficiently obtained using the active contour model and iterative threshold segmentation, and the internal organs are then accurately located for the multi-finger robot hand. The experimental results show that the visceral organs of Three-Yellow Chickens and Cherry Valley Ducks were recognized with accuracies of 93.3 and 86.7%, respectively. These results will assist the application of this technology in actual poultry processing.

REFERENCES AND NOTES

1. Zhang K. B. 2011.
In the "Twelfth Five-Year Plan" period, development targets of Chinese poultry slaughter and processing technology and equipment. Meatind. 3: 8-11.
2. Harmse J. L., Engelbrecht J. C., Bekker J. L. 2016. The impact of physical and ergonomic hazards on poultry abattoir processing workers: A review. Int. J. Environ. Res. Public Health. 13: 197-198.
3. Janssen P. C. H., Gerrits J. G. M., Nieuwelaar V. D., Adrianus J. 2011. Method and apparatus for the separate harvesting of back skin and back meat from a carcass part of slaughtered poultry. Patent US 07967668.
4. Bendt and Torben. 2009. A method and an apparatus for evisceration of poultry. Patent WO 2009/043348A1.
5. Ma P. W., Wang L., Ye J., Wang Z. 2009. The domestic application prospects of automatic evisceration technology. Academic Periodical of Farm Products Processing and Equipment of Slaughtered Poultry. 10: 93-95.
6. Zhang K. B. 2012. Design on live area of poultry slaughtering and processing. Meatind. 3: 2-6.
7. Ma P. W., Wang L. H., Ye J. P., Wang Z. K. 2009. The domestic application prospects of automatic evisceration technology and equipment of slaughtered poultry. Academic Periodical of Farm Products Processing. 10: 93-96.
8. Jing J. Q., Wang L. H., Ye J. P., Wang M., Guo N. 2016. Research and application of apparatus for opening the body cavity of poultry. Packaging and Food Machinery. 34: 64-66.
9. Cornelis. 1992. Apparatus for eviscerating slaughtered poultry. Patent 0497014A1.
10. Bendt and Torben. 2009. A method and an apparatus for evisceration of poultry. Patent WO 2009/043348A1.
11. Edward J. 1991. Device for eviscerating slaughtered poultry. Patent 0432317A1.
12. Lindholst S. 2002. Method, apparatus and evisceration spoon for eviscerating intestine packs of slaughtered poultry. Patent WO 98/044806.
13. Stork Pmt B. V. 2006.
Eviscerating member, device and method for processing a cluster of viscera of a slaughtered animal. Patent EP 1248525B1.
14. Tieleman Food Equipment B.V. 2008. Device for removing viscera from slaughtered poultry. Patent US 7425173B1.
15. Machinefabriek Meyn B. V. 1992. Method and apparatus for eviscerating poultry. Patent 0574617A1.
16. Stork Pmt B. V. 1997. Method and device for processing a cluster of organs from a slaughtered animal. Patent EP 0587253B1.
17. Tieleman R. J. 1983. Poultry eviscerating tool. Patent US 4435878.
18. Wang L. H., Yan C. L., Ye J. P., Ma P. W., Wang Z. K., Pan M. 2010. Design and experiment of QNZ15 automatic poultry eviscerator. Transactions of the Chinese Society of Agricultural Machinery. 41: 220-224.
19. Wang M. 2011. Study on structure and trajectory parameters of the grippable automatic evisceration manipulator for poultry. Master's thesis, China Agricultural Mechanization Science Institute.
20. Ma P. W. 2010. Study on structure and trajectory parameters of device for removing viscera from slaughtered poultry. Master's thesis, China Agricultural Mechanization Science Institute.
21. Bosoon P., Kurt C. L., William R. W., Douglas P. S. 2006. Performance of hyperspectral imaging system for poultry surface fecal contaminant detection. Journal of Food Engineering.
22. Bosoon P., Kurt C. L., William R. W., Yud R. C., Kevin C. 2002. Discriminant analysis of dual-wavelength spectral images for classifying poultry carcasses. Computers and Electronics in Agriculture.
23. Chao K., Yang C. C., Chen Y. R., Kim M. S., Chan D. E. 2007. Hyperspectral-multispectral line-scan imaging system for automated poultry carcass inspection applications for food safety. Poultry Science.
24. Yin P. Y. 2002. Maximum entropy-based optimal threshold selection using deterministic reinforcement learning with controlled randomization. Signal Processing. 82: 993-1006.
25. Otsu N. 1979.
A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics. 9: 62-66.
26. Kittler J., Illingworth J. 1986. Minimum error thresholding. Pattern Recognition. 19: 41-47.
27. Kapur J. N., Sahoo P. K., Wong A. K. C. 1985. A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing. 29: 273-285.
28. Chan T. F., Vese L. A. 2001. Active contours without edges. IEEE Transactions on Image Processing. 10: 266-277.

Acknowledgments

This work was supported by the National "Twelfth Five-Year" Plan for Science & Technology Support of China (2015BAD19B06). © 2018 Poultry Science Association Inc.


Journal of Applied Poultry Research, Oxford University Press.

Published: Jan 17, 2018
