Hierarchical Approach to Detect Fractures in CT DICOM Images

Abstract

This paper deals with the identification of fractures in CT scan images. A large amount of significant and critical information is normally stored in medical data. Highly efficient and automated computational methods are needed to process and analyze all available data in order to help the physician in diagnosis, decision making and treatment planning. Each CT scan includes a large number of slices. In this paper, a new hierarchical segmentation algorithm is applied to all slices; it automatically extracts the bone structures using the HMRF-EM segmentation method. A template-matching technique is then employed to extract the affected portion. The new approach is evaluated on eight patients' data and validated by radiologists. The performance of the work is analyzed and compared with recent works using sensitivity, specificity and accuracy.

1. INTRODUCTION

Computed tomography (CT or CAT scan) is a non-invasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal or axial images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs. CT scans of bones can provide more information about bone tissues and bone structures than standard X-rays, thus providing more information related to injuries. A fracture sustained due to trauma is called a traumatic fracture; such fractures are caused by falls, road traffic accidents, fights, etc. Detecting a simple fracture is an easy task, but detecting a complex fracture is difficult. Therefore, automated fracture detection can help physicians examine CT images and assess the injury severity within a short period. Automated extraction of features such as the presence and location of a fracture, and the displacement between the fractured bones, is vital for such injuries. Few studies focus on automated fracture detection; some of them are reviewed here. Crack detection in bone X-ray images can be performed with a fuzzy index measure [1], which is suited to 2D X-ray images. Hairline mandibular fractures are detected and highlighted using a Markov Random Field (MRF) modeling approach coupled with MAP estimation [2]; Gibbs sampling is used to maximize the posterior probability, but the method lacks a high degree of automation. Fracture points are automatically detected from a sequence of CT images based on scale-space theory and graph-based filtering [3] (using prior anatomical knowledge), but this scheme fails in multi-fracture scenarios. Femur fractures are marked by texture analysis [4], computing the angle between the neck axis and the shaft axis. A comprehensive comparison of various classifier combination methods [5] in the context of detecting bone fractures in X-ray images has been reported in order to achieve high sensitivity. A divide-and-conquer approach for fracture detection partitions the problem in the kernel space of the SVM [6] into smaller sub-problems and trains an SVM to specialize in solving each sub-problem. Femur fractures are also detected by measuring the neck-shaft angle [7]. Different features and classification techniques [8] have been proposed to detect fractures; features such as neck-shaft angle, Gabor texture, Markov Random Field texture and intensity gradient are used in that work, but it gives a false alarm rate above 10%.
A geodesic active contour model [9] with global constraints has been applied to segment the bone region; a prior shape is collected and used as a global constraint in that model. A CAD system for long-bone fracture detection using gradient analysis is presented in [10]. Although the above techniques detect fractures well, they are suited only to 2D X-ray images and do not provide very good accuracy, which makes it difficult for physicians to analyze the severity of fractures. Thresholding [11], edge-based [12, 13], region-based [14], graph-based [15], classification-based [16] and deformable-model [17] approaches are the most commonly used image segmentation techniques [18, 19] in different medical applications. The objective of the proposed method is to detect complex fractures, i.e. fractures exhibiting patterns such as transverse, oblique, spiral and comminuted. For detecting this type of fracture, the proposed technique uses linear structuring elements. Based on these patterns, a total of 80 structuring elements are generated and stored in the database. The proposed method improves both the accuracy and the sensitivity significantly. A volume reconstruction method is used to view the fractures in order to assess their severity. The proposed scheme is modeled as a four-step approach: enhancement, bone region extraction, fracture region isolation and volume rendering. In the enhancement step, a 2D anisotropic diffusion filter [20, 21] is used to remove noise and enhance contrast. In bone region extraction, the image is first segmented using the HMRF-EM technique, the bone region is extracted using the resulting mask, and adaptive thresholding is performed to segment the bone region. The fracture region is isolated by a template-matching technique; the template frames are generated and stored in the database for further processing. Finally, a 3D image is formed from the various 2D slices using a volume reconstruction method. The performance is compared with some standard techniques; the experimental results show that the method identifies the crack well when compared with existing techniques. The remainder of this paper is organized as follows: Section 2 introduces the methodology, Section 3 describes the experimental results with analysis charts, and a brief conclusion is given in Section 4.

2. METHODOLOGY

Fast and accurate fracture detection is important for physicians' decisions. In order to perform fracture detection accurately, the proposed method works hierarchically. Our objective is to visualize fractures in 3D; therefore, in this work, all individual slices are processed and a 3D view is generated using the Ray Casting method. An overview of the proposed method is shown in Fig. 1. The input to the proposed method is the set of 2D slices of a 3D image. Each slice is first enhanced and then segmented using the Hidden Markov Random Field–Expectation Maximization (HMRF-EM) technique to extract the bone region. From the bone region, the fracture portion is isolated using a template-matching technique; the different templates are generated from the fracture properties and stored as a large dataset for analysis. Section 2.1 illustrates the enhancement step used to reduce noise and enhance contrast. Section 2.2 describes the steps involved in bone region extraction. Section 2.3 describes how the fracture region is isolated using the template-matching technique. Finally, Section 2.4 describes the steps to generate the 3D image from the various slices.

Figure 1. Block diagram of bone fracture detection.
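To make the four-step flow concrete, the following is a minimal Python sketch of the pipeline structure just described. It is an illustration only: the stage functions are placeholders for the steps detailed in Sections 2.1–2.3 (the authors' implementation was written in MATLAB), so each stage is passed in as a function that the later sections' sketches could supply.

```python
import numpy as np

def detect_fractures(slices, enhance, segment_bone, isolate_fracture, render):
    """Hypothetical skeleton of the four-step approach: enhancement ->
    bone extraction -> fracture isolation -> volume rendering."""
    fracture_masks = []
    for ct_slice in slices:                            # each 2D CT slice
        enhanced = enhance(ct_slice)                   # Section 2.1: anisotropic diffusion
        bone = segment_bone(enhanced)                  # Section 2.2: HMRF-EM + adaptive threshold
        fracture_masks.append(isolate_fracture(bone))  # Section 2.3: template matching
    volume = np.stack(fracture_masks, axis=0)          # combine 2D slices into a 3D volume
    return render(volume)                              # Section 2.3.2: ray casting
```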
2.1. Enhancement step

The first step in the computerized analysis is to enhance the image, because digital medical images are often affected by unwanted noise and blurriness. They also suffer from a lack of contrast and sharpness, which can result in false diagnosis. Noise removal plays a vital role in medical imaging applications in order to enhance and recover analysis details that may be hidden in the data. CT scan images normally contain Gaussian and Poisson noise, which can be effectively removed by an anisotropic diffusion filter. The idea is to apply noise removal within nearly homogeneous regions while avoiding any alteration of the signal along significant discontinuities; the discontinuities are edges in the image that arise from sharp changes in intensity. The purpose of the anisotropic diffusion filter [20, 21] is to improve medical image quality by removing noise and enhancing edges: it retains the edges in the image while diminishing the noise in the non-homogeneous regions.
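As an illustration of this step, here is a minimal sketch of Perona–Malik-style anisotropic diffusion in Python/NumPy. The paper only states that a 2D anisotropic diffusion filter [20, 21] is used, so the conduction function, iteration count and kappa below are assumed illustrative values, not the authors' settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.1):
    """Perona-Malik diffusion: smooth homogeneous regions, preserve edges.
    n_iter, kappa and gamma are illustrative values, not the paper's."""
    img = img.astype(np.float64)
    for _ in range(n_iter):
        # finite differences to the four neighbors
        # (np.roll wraps at the border, which is acceptable for a sketch)
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # conduction coefficients: ~1 in flat areas, small across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img = img + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```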
2.2. Bone region extraction

A single slice of a CT scan consists of three regions: background, skin and bone. Only the bone region is needed to detect the fracture. For this purpose, the given image is first segmented using HMRF-EM (K = 3). To obtain the bone region, the background and skin regions are removed from the original image using the mask of the segmented image. Finally, an adaptive thresholding technique is used to segment the bone region. Figure 2 illustrates the concept of bone region extraction.

Figure 2. Block diagram for bone region extraction.

To obtain the bone region, the image must first be split into three areas: background, skin and bone. HMRF-EM [22, 23] was first proposed for the segmentation of brain images. Many clustering algorithms exist in the literature, but their outputs are not smooth and contain morphological holes; since a smooth bone area is important for this work, the HMRF model is adopted here. An HMRF model is a stochastic process generated by an MRF whose state sequence cannot be observed directly but can be observed through a field of observations. The importance of the HMRF model derives from MRF theory, in which the spatial information in an image is encoded through contextual constraints on neighboring pixels. The HMRF-EM technique is carried out in two steps: likelihood estimation and MAP estimation. To use the HMRF-EM framework, an initial segmentation is first generated using k-means clustering on the gray-level intensities of the pixels. The initial segmentation provides the initial labels $l^0$ for the MAP algorithm, and the initial parameter set for the EM algorithm is $s^0$. Consider an image $P = \{p_1, \ldots, p_N\}$, where each $p_i$ is the intensity of a pixel. The goal is to infer a configuration of labels $L = (l_1, l_2, \ldots, l_N)$, where $l_i \in P_L$ and $P_L$ is the set of all possible labels; in a binary segmentation problem, $P_L = \{0, 1\}$. According to MAP estimation, the label configuration $l^*$ should satisfy

$$l^* = \arg\max_l \{ d(p \mid l, s)\, d(l) \} \quad (1)$$

where $d(l)$ is a Gibbs distribution, which can be written as

$$d(l) = \frac{1}{t} \exp(-U(l)) \quad (2)$$

where $U(l)$ is the prior energy function and $t$ is the partition function. The joint likelihood probability $d(p \mid l, s)$ is defined by

$$d(p \mid l, s) = \prod_i d(p_i \mid l_i, s_i) \quad (3)$$

where $d(p_i \mid l_i, s_i)$ is a Gaussian distribution with parameters $s_i = (\mu_i, \sigma_i)$, and $s = \{ s_l \mid l \in P_L \}$ is the parameter set, which is obtained by the EM algorithm. Let $G(t; s_l)$ denote a Gaussian distribution function with parameters $s_l = (\mu_l, \sigma_l)$:

$$G(t; s_l) = \frac{1}{\sqrt{2\pi\sigma_l^2}} \exp\!\left( -\frac{(t - \mu_l)^2}{2\sigma_l^2} \right) \quad (4)$$

Hence, the likelihood can be rewritten as

$$d(p \mid l, s) = \prod_i G(p_i; s_{l_i}) = \frac{1}{t'} \exp(-U(p \mid l)) \quad (5)$$

After the likelihood estimation, the HMRF-EM method performs MAP estimation of the labels by minimizing the total posterior energy derived from Eqs. (2) and (5):

$$l^* = \arg\min_{l \in L} \{ U(p \mid l, s) + U(l) \} \quad (6)$$

for the given $p$ and $s$, where the likelihood energy is

$$U(p \mid l, s) = \sum_i U(p_i \mid l_i, s) = \sum_i \left[ \frac{(p_i - \mu_{l_i})^2}{2\sigma_{l_i}^2} + \ln \sigma_{l_i} \right] \quad (7)$$

The prior energy function $U(l)$ has the form

$$U(l) = \sum_{c \in C} V_c(l) \quad (8)$$

where $V_c(l)$ is the clique potential and $C$ is the set of all possible cliques. In the image domain, it is assumed that one pixel has at most four neighbors. The clique potential defined on pairs of neighboring pixels is then

$$V_c(l_i, l_j) = \frac{1}{2}\left(1 - N_{l_i, l_j}\right) \quad (9)$$

where

$$N_{l_i, l_j} = \begin{cases} 0 & \text{if } l_i \neq l_j \\ 1 & \text{otherwise} \end{cases}$$

The HMRF-EM algorithm is summarized in Steps 1–5:

1. Start with an initial parameter set $s^{(0)}$.
2. Calculate the likelihood distribution $d^{(t)}(p_i \mid l_i, s_i)$.
3. Using the current parameter set $s^{(t)}$, estimate the labels by MAP estimation.
4. Calculate the posterior distribution for all $x \in P_L$ and all pixels $p_i$ using

$$d^{(t)}(x \mid p_i) = \frac{G(p_i; s_x)\, d(x \mid l_{N_i}^{(t)})}{d^{(t)}(p_i)} \quad (10)$$

where $l_{N_i}^{(t)}$ is the neighborhood configuration of $l_i^{(t)}$,

$$d^{(t)}(p_i) = \sum_{l \in P_L} G(p_i; s_l)\, d(l \mid l_{N_i}^{(t)}) \quad (11)$$

and

$$d(x \mid l_{N_i}^{(t)}) = \frac{1}{t} \exp\!\left( -\sum_{j \in N_i} V_c(x, l_j^{(t)}) \right) \quad (12)$$

5. Use $d^{(t)}(x \mid p_i)$ to update the parameters:

$$\mu_x^{(t+1)} = \frac{\sum_i d^{(t)}(x \mid p_i)\, p_i}{\sum_i d^{(t)}(x \mid p_i)} \quad (13)$$

$$\left( \sigma_x^{(t+1)} \right)^2 = \frac{\sum_i d^{(t)}(x \mid p_i)\, \left(p_i - \mu_x^{(t+1)}\right)^2}{\sum_i d^{(t)}(x \mid p_i)} \quad (14)$$

The output of the HMRF-EM technique is very smooth. Using this mask, the bone region is extracted from the contrast-enhanced image.
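For concreteness, here is a compact, illustrative Python/NumPy sketch of the steps above: a simplified intensity-based initialization (quantiles standing in for k-means), ICM-style MAP relabeling with the 4-neighbor prior of Eq. (9), the soft posterior of Eqs. (10)–(12), and the parameter updates of Eqs. (13)–(14). It is a simplified reading of the algorithm under these stated assumptions, not the authors' implementation.

```python
import numpy as np

def hmrf_em(img, k=3, em_iters=10, map_iters=5):
    """Illustrative HMRF-EM sketch (not the authors' code)."""
    p = img.astype(np.float64)
    # init: spread class means over the intensity range (k-means stand-in)
    mu = np.quantile(p, np.linspace(0.1, 0.9, k))
    sigma = np.full(k, p.std() / k + 1e-6)
    labels = np.abs(p[..., None] - mu).argmin(-1)

    def prior_energy(lab, x):
        # sum of clique potentials 0.5*(1 - [l_i == l_j]) over the 4 neighbors
        # (np.roll wraps at the border; acceptable for a sketch)
        e = np.zeros_like(p)
        for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
            e += 0.5 * (np.roll(lab, sh, axis=ax) != x)
        return e

    for _ in range(em_iters):
        # MAP step (ICM sweeps): minimize likelihood energy + prior energy, Eq. (6)
        for _ in range(map_iters):
            energy = np.stack([
                (p - mu[x]) ** 2 / (2 * sigma[x] ** 2) + np.log(sigma[x])
                + prior_energy(labels, x) for x in range(k)], axis=-1)
            labels = energy.argmin(-1)
        # E step: posterior d(x|p_i) proportional to G(p_i; s_x) * prior, Eq. (10)
        post = np.stack([
            np.exp(-(p - mu[x]) ** 2 / (2 * sigma[x] ** 2)) / sigma[x]
            * np.exp(-prior_energy(labels, x)) for x in range(k)], axis=-1)
        post /= post.sum(-1, keepdims=True) + 1e-12
        # M step: Eqs. (13)-(14)
        for x in range(k):
            w = post[..., x]
            mu[x] = (w * p).sum() / (w.sum() + 1e-12)
            sigma[x] = np.sqrt((w * (p - mu[x]) ** 2).sum() / (w.sum() + 1e-12)) + 1e-6
    return labels
```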
In order to identify the fracture, an adaptive thresholding technique [24] is applied to the extracted bone area. The extracted image is divided into an array of overlapping sub-images. A gray-level histogram is produced for each sub-image, and the optimal threshold for that sub-image is calculated from the histogram. Since the sub-images overlap, a threshold can be produced for each individual pixel by interpolating the thresholds of the sub-images. Using the obtained thresholds, the bone area is segmented.

2.3. Fracture area isolation

The major task in fracture detection is to determine whether the medical image contains a fracture and, if so, where the fracture is located in the input image. Template matching is well suited to determining such fractures in the image. The following section describes the use of linear structuring elements for fracture detection in detail.

2.3.1. Fracture detection using linear structuring elements

The main goal is to detect complex fractures with various patterns, such as transverse, oblique, spiral and comminuted, in any part of the body. These fractures have the following geometric features, based on observations made on a number of fracture images:

- The fractures branch like a tree: the tree-like branching occurs in lines and is continued by lines.
- The fractures have a more or less constant width.

With the above knowledge about bone fracture features, linear structuring elements are chosen within a 20 × 20 window, in which the line width varies from 1 to 20. In total, 80 structuring elements are generated from the fracture properties, depending on width and orientation; some sample structuring elements are shown in Fig. 3. Template matching [25] involves comparing a given template with windows of the same size in an image and identifying the window that is most similar to the template (here, a linear structuring element). Structuring elements with width less than 5 are used to detect hairline breakage; for major fractures, structuring elements with widths from 6 to 20 are used.

Figure 3. Sample structuring elements for (a) breakage with width = 2, (b) breakage with width = 3, (c) horizontal breakage with width = 1 and (d) vertical breakage with width = 2.

The accuracy of a template-matching process depends on the accuracy of the metric used to determine the similarity between a template and a window. Many matching metrics exist in the literature, but in this work the sum of absolute intensity differences is used to locate the defective portion, because it gave the best matches. Let a template be denoted by $f_1$ and an image by $f_2$. Assume that the template is of size $n \times n$, the image is of size $m \times m$, and $n$ is always less than $m$. The sum of absolute intensity differences is defined by

$$s(x, y) = \sum_{i=1}^{n} \sum_{j=1}^{n} \left| f_1(i, j) - f_2(x + i - 1,\, y + j - 1) \right|, \quad x, y = 1, \ldots, m - n + 1 \quad (15)$$

The measure $s$ gives the dissimilarity between $f_1$ and the window at location $(x, y)$ in $f_2$: the smaller the value of $s(x, y)$, the more similar the template and the window. The algorithm keeps track of the smallest value of $s$ obtained so far, $s_{\min}$, and at each position compares the running sum to $s_{\min}$. If the running sum becomes equal to or greater than $s_{\min}$, further computation of the similarity measure at that position is abandoned, since further computation can only increase $s$. If the completed sum at a position is less than $s_{\min}$, $s_{\min}$ is replaced with the new sum.
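The following Python/NumPy sketch illustrates both ideas: generating a line-shaped template at a given width and orientation, and SAD matching with the early-abandonment test described above. The construction method (drawing a rotated line into a 20 × 20 window) and any angle set built from it are assumptions for illustration; the paper does not specify how its 80 elements are constructed.

```python
import numpy as np

def line_element(size=20, width=2, angle_deg=0.0):
    """Binary line structuring element: pixels within width/2 of a line
    through the window center at the given angle (illustrative construction)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))  # distance to the line
    return (dist <= width / 2.0).astype(np.float64)

def sad_match(template, image):
    """Best-match location by sum of absolute differences (Eq. 15), with
    early abandonment once the running sum exceeds the best found so far."""
    n = template.shape[0]
    m = image.shape[0]
    s_min, best = np.inf, (0, 0)
    for x in range(m - n + 1):
        for y in range(m - n + 1):
            s = 0.0
            for i in range(n):                 # row-by-row accumulation
                s += np.abs(template[i] - image[x + i, y:y + n]).sum()
                if s >= s_min:                 # cannot improve: abandon position
                    break
            else:
                s_min, best = s, (x, y)
    return best, s_min
```

A bank of 80 such elements could then be formed by pairing the widths 1–20 with a small set of orientations, matching the width/orientation description above.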
2.3.2. Volume rendering

Each CT scan includes a large number of slices; each slice is processed and the slices are combined to give the overall effect. Volume rendering is a technique for visualizing sampled functions of three spatial dimensions by computing 2D projections of a colored, semi-transparent volume. Volume rendering forms an RGBA volume from the data by projecting it onto the 2D viewing plane from the desired point of view. The Ray Casting technique [26] is used to produce the volume. The obtained 3D image can be rotated to various angles to view the full details of the injured part. The length and the width of the affected portions are also obtained to analyze the fracture region and guide the physicians.

3. RESULTS AND DISCUSSIONS

3.1. Dataset

The dataset has been obtained from the Arthi Scan Centre, Tirunelveli, India. Data were collected from 70 patients who have fractures in the skull or other bones. Forty-five to seventy-five images were collected for each patient on a GE LightSpeed 16-slice CT scanner. Axial CT images with 5 mm slice thickness are used for the study. Of the collected cases, 57 patients exhibited small to very severe bone fractures.

3.2. Experimental results

The proposed method has been tested on various CT scan DICOM images. The reported performance figures were obtained with an optimized MATLAB implementation, measured on a 2.40 GHz Pentium 4 CPU with 512 MB of RAM running Microsoft Windows XP (2002). The proposed method and the earlier works were implemented in MATLAB and run on our native dataset. A sample CT scan consists of 45–75 slices. A sample DICOM image is shown in Fig. 4a (test image T1). The DICOM image also contains the patient's history, so only the portion of the image necessary for processing is retained; the resized image (region of interest) is shown in Fig. 4b. Preprocessing is important to enhance the image, so the image is filtered using the anisotropic diffusion filter, as shown in Fig. 4c. The enhanced image is subjected to the HMRF-EM algorithm with k = 3 to produce the mask for the bone region; since each image has soft tissue, bone and background regions, the image is segmented into three regions (k = 3). The output after HMRF-EM is shown in Fig. 4d.

Figure 4. Sample outputs for the T1 test image: (a) a 2D slice of a CT DICOM image, (b) resized image (region of interest), (c) filtered image using the anisotropic diffusion filter, (d) HMRF output with k = 3 (to extract the bone region), (e) after adaptive thresholding, (f) fracture region isolated in a 2D slice, (g) 3D image to visualize the fracture portion with different orientations and (h) fracture portion with different orientations.

The result of HMRF-EM has three regions. Using the bone region mask, the bone alone is extracted from the input image. The extracted bone region is then subjected to the adaptive thresholding method, which yields a binary image; the result is shown in Fig. 4e. The image is then subjected to the template-matching technique. Based on observation of various sample fracture images, the fractures branch like a tree (the tree-like branching occurs in lines and is continued by lines) and have a more or less constant width. The 80 generated structuring elements are used in the template-matching process to identify the fracture region: widths less than 5 capture minute hairline breakages, while structuring elements with widths varying from 6 to 20 capture major fractures. The fracture portion isolated by template matching is depicted in Fig. 4f. The fracture region is isolated in all 2D slices, and the slices are combined to generate the 3D image.
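As an illustration of this combination step, the sketch below stacks the per-slice masks into a volume and renders a projection by casting parallel rays along one axis. A maximum-intensity projection and a fixed-opacity compositing pass are used here for simplicity; the paper's renderer builds an RGBA volume and uses ray casting [26], so these are assumed, simplified stand-ins, not the authors' renderer.

```python
import numpy as np

def render_mip(slices_2d):
    """Stack 2D slices into a 3D volume and render it by casting one ray
    per output pixel along the z axis (maximum-intensity projection)."""
    volume = np.stack(slices_2d, axis=0)   # shape: (num_slices, H, W)
    return volume.max(axis=0)

def render_composite(slices_2d, alpha=0.1):
    """Front-to-back alpha compositing along each ray, closer in spirit to
    RGBA volume rendering; alpha is an illustrative per-sample opacity."""
    volume = np.stack(slices_2d, axis=0).astype(np.float64)
    out = np.zeros(volume.shape[1:])
    transmittance = np.ones(volume.shape[1:])
    for sample in volume:                  # march along the ray, slice by slice
        out += transmittance * alpha * sample
        transmittance *= (1.0 - alpha)
    return out
```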
The 3D image is used to visualize the fracture portion more clearly; it can be rotated to various angles to view the full details of the injured part. The 3D images at various orientations are given in Fig. 4g. The length and the width of the affected portion are also obtained from the 3D image for diagnosis; the fracture-isolated 3D image is shown in Fig. 4h. The length of the fracture in the T1 test image is 16.4 cm and the width is 0.4 cm, as given in Table 1. The results for another sample image, T2, are depicted in Fig. 5. A sample DICOM slice for T2 is given in Fig. 5a. It is preprocessed using the anisotropic diffusion filter (Fig. 5b) and subjected to the HMRF algorithm to extract the bone region, which is then adaptively thresholded (Fig. 5c). Finally, the fracture region is isolated in the 2D slice (Fig. 5d), and all slices are combined to generate the 3D image; the fracture portion for T2 at different orientations is given in Fig. 5e. The proposed work was carried out over various test images, and the lengths and widths for eight test images are given in Table 1.

Table 1. Length and width of the various sample images.

Test image    T1    T2     T3   T4    T5    T6     T7    T8
Length (cm)   16.4  10.33  7.7  6.54  3.64  10.74  8.57  7.87
Width (cm)    0.4   0.3    0.3  0.08  0.11  0.2    0.13  0.11

Figure 5. (a) Input image (DICOM), (b) filtered image after the anisotropic diffusion filter, (c) after adaptive thresholding, (d) fracture region isolation in a 2D slice and (e) 3D image formation at different orientations.

3.3. Discussion

The results were validated against the assessment and evaluation made by radiologists on the CT scans in the above-mentioned database. As the results show, the designed algorithm is able to detect the fractures relatively accurately. Using the proposed algorithm, fractured bone can be further highlighted in the processed images; this could help radiologists better analyze the scans and increase the chances of capturing the fractures. Sensitivity, or True Positive Rate, is the ability of a test to correctly identify those with a crack, whereas Specificity, or True Negative Rate, is the ability of the test to correctly identify those without a crack:

$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (16)$$

$$\text{Specificity} = \frac{TN}{FP + TN} \quad (17)$$

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (18)$$

where TP denotes True Positives (correctly identified), FP False Positives (incorrectly identified), TN True Negatives (correctly rejected) and FN False Negatives (incorrectly rejected).
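A minimal sketch of how these per-image metrics can be computed from a predicted fracture mask and a radiologist-provided ground-truth mask follows; binary arrays are assumed, and this is illustrative rather than the authors' evaluation code.

```python
import numpy as np

def confusion_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity and accuracy (Eqs. 16-18)
    from binary predicted and ground-truth masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # correctly identified fracture pixels
    tn = np.sum(~pred & ~truth)    # correctly rejected background pixels
    fp = np.sum(pred & ~truth)     # incorrectly identified
    fn = np.sum(~pred & truth)     # incorrectly rejected
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```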
The sensitivity and specificity results for the eight patients' sample images were determined with the help of ground-truth images and are given in Table 2. The overall True Positive Rate (sensitivity) is 0.957 and the overall True Negative Rate (specificity) is 0.978. The proposed work is compared with the works of Wu et al. [27] and Chowdhury et al. [3], which have sensitivities of 0.884 and 0.864, respectively. The proposed method detects the crack well, as its sensitivity (0.957) is considerably higher than that of the existing works; similarly, its specificity (0.978) is considerably higher than the existing works' (0.887 and 0.881). Figure 6 depicts the TPR, FNR, TNR and FPR comparison of the above-mentioned techniques, from which it is clear that the proposed method detects the crack well. Figure 7 shows a graphical chart of the accuracy comparison: the overall accuracy of the proposed work (0.965) is higher than that of the previous techniques (0.903 and 0.876). Considering sensitivity, specificity and accuracy together, the proposed work performs better than the other methods. The computational time of the work (Table 3) is adequate, with an average execution time of 10 min 26 s, and no prior training is required. The percentage of accuracy improvement (PAI) over an existing work is computed as

$$\mathrm{PAI} = \frac{A_P - A_E}{A_E} \quad (19)$$

where $A_P$ is the accuracy of the proposed method and $A_E$ is the accuracy of the existing work. The proposed method achieves 10% and 6.7% improvement over the existing works [3, 27].

Table 2. Sensitivity and specificity analysis (Prop. = proposed work; [27] = Wu et al.; [3] = Chowdhury et al.).

              Sensitivity            Specificity            Accuracy
Test image    Prop.  [27]   [3]     Prop.  [27]   [3]     Prop.  [27]   [3]
T1            0.975  0.943  0.912   0.999  0.962  0.931   0.998  0.945  0.904
T2            0.981  0.917  0.935   0.986  0.913  0.894   0.988  0.925  0.893
T3            0.909  0.863  0.854   0.98   0.865  0.862   0.995  0.903  0.884
T4            0.975  0.972  0.903   0.998  0.905  0.894   0.891  0.863  0.864
T5            0.867  0.726  0.82    0.948  0.882  0.869   0.979  0.942  0.925
T6            0.969  0.859  0.793   0.964  0.813  0.824   0.955  0.863  0.831
T7            0.988  0.892  0.813   0.969  0.847  0.868   0.948  0.901  0.865
T8            0.995  0.903  0.883   0.979  0.911  0.905   0.962  0.879  0.842
Overall       0.957  0.884  0.864   0.978  0.887  0.881   0.965  0.903  0.876
Figure 6. TPR, FNR, TNR and FPR comparison.

Figure 7. Accuracy comparison.

Table 3. Computation time.

Test image   Number of slices   Proposed work   Wu et al. [27]   Chowdhury et al. [3]
T1           60                 11 min 50 s     12 min 01 s      13 min 42 s
T2           49                 7 min 42 s      8 min 50 s       8 min 22 s
T3           38                 5 min 55 s      4 min 37 s       7 min 86 s
T4           60                 13 min 42 s     13 min 25 s      15 min 32 s
T5           75                 16 min          17 min 35 s      17 min 08 s
T6           60                 10 min 43 s     11 min 45 s      9 min 43 s
T7           55                 9 min 36 s      9 min 26 s       10 min 45 s
T8           52                 8 min 40 s      9 min 08 s       11 min 38 s
Average                         10 min 26 s     11 min 06 s      12 min 04 s

3.4. ROC analysis

The segmentation results of the proposed work are compared with the other standard techniques by calculating the True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). The True Positive Rate (TPR), or sensitivity, and the False Positive Rate (FPR), or 1 − specificity, are calculated for the sample images. Figure 8 shows the ROC curve (sensitivity vs. 1 − specificity) for all patients. The results also demonstrate that the proposed technique detects the crack well compared with the other techniques.

Figure 8. ROC curve.

The ROC analysis shows that the proposed method provides better detections than the two standard methods. The accuracy of crack detection could be improved further by decreasing the error rates; this work can also be extended to 3D images, and the depth of the crack can be calculated.
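To illustrate how such an ROC plot is assembled from per-image results, the following sketch computes one (FPR, TPR) point per test image from predicted/ground-truth mask pairs, reusing the confusion_metrics helper sketched earlier; the matplotlib plotting call is illustrative and assumes the library is available.

```python
import matplotlib.pyplot as plt

def roc_points(mask_pairs):
    """One (FPR, TPR) point per test image, as plotted in Fig. 8.
    mask_pairs: iterable of (predicted_mask, ground_truth_mask)."""
    points = []
    for pred, truth in mask_pairs:
        sens, spec, _ = confusion_metrics(pred, truth)  # from the earlier sketch
        points.append((1.0 - spec, sens))               # (FPR, TPR)
    return sorted(points)

def plot_roc(points):
    fpr, tpr = zip(*points)
    plt.plot(fpr, tpr, marker="o")
    plt.xlabel("1 - Specificity (FPR)")
    plt.ylabel("Sensitivity (TPR)")
    plt.title("ROC curve")
    plt.show()
```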
4. CONCLUSION

This paper has presented a method for detecting fractures in bones using automated bone segmentation, adaptive thresholding and template matching. The results show that the proposed method is capable of detecting fractures accurately, and the algorithm produces segmentations with high sensitivity and specificity: the overall sensitivity over all patients is 0.957 and the overall True Negative Rate (specificity) is 0.978. The ROC analysis shows that the proposed method provides better detections than the other methods. In future, this work can be extended to improve the accuracy of fracture detection by decreasing the error rates.

REFERENCES

1 Linda, C.H. and Jiji, G.W. (2011) Crack detection in X-ray images using fuzzy index measure. Appl. Soft Comput., 11, 3571–3579.
2 Chowdhury, A.S., Bhattacharya, A., Bhandarkar, S.M., Datta, G.S., Yu, J.C. and Figueroa, R. (2007) Hairline Fracture Detection Using MRF and Gibbs Sampling. IEEE Workshop on Applications of Computer Vision (WACV), Austin, TX, USA, 21–22 February 2007, pp. 56–61.
3 Chowdhury, A.S., Bhandarkar, S.M., Datta, G. and Yu, J.C. (2006) Automated Detection of Stable Fracture Points in Computed Tomography Image Sequences. 3rd IEEE Int. Symp. Biomedical Imaging: Nano to Macro, Arlington, VA, USA, 6–9 May 2006, pp. 1320–1323.
4 Yap, D.W.-H., Chen, Y., Leow, W.K., Howe, T.S. and Png, M.A. (2004) Detecting Femur Fractures by Texture Analysis of Trabeculae. 17th IEEE Int. Conf. Pattern Recognition (ICPR), Cambridge, UK, 26 August 2004, pp. 730–733.
5 Lum, V.L.F., Leow, W.K., Chen, Y., Howe, T.S. and Png, M.A. (2005) Combining Classifiers for Bone Fracture Detection in X-ray Images. IEEE Int. Conf. Image Processing (ICIP), Genova, Italy, 14 September 2005, pp. 1149–1152.
6 He, J.C., Leow, W.K. and Howe, T.S. (2007) Hierarchical Classifiers for Detection of Fractures in X-ray Images. 12th Int. Conf. Computer Analysis of Images and Patterns (CAIP), Vienna, Austria, 27–29 August 2007, pp. 962–969. Springer, Berlin, Heidelberg.
7 Tian, T.P., Chen, Y., Leow, W.K., Hsu, W., Howe, T.S. and Png, M.A. (2003) Computing Neck-Shaft Angle of Femur for X-ray Fracture Detection. Int. Conf. Computer Analysis of Images and Patterns (CAIP 2003), Groningen, The Netherlands, 25–27 August 2003, pp. 82–89. Springer, Berlin, Heidelberg.
8 Lim, S.E., Xing, Y., Chen, Y., Leow, W.K., Howe, T.S. and Png, M.A. (2004) Detection of Femur and Radius Fractures in X-ray Images. Int. Conf. Advances in Medical Signal and Information Processing, Malta, September 2004.
9 Jia, Y. and Jiang, Y. (2006) Active Contour Model with Shape Constraints for Bone Fracture Detection. Int. Conf. Computer Graphics, Imaging and Visualisation, Sydney, Australia, 26–28 July 2006, pp. 90–95.
10 Donnelley, M., Knowles, G. and Hearn, T. (2008) A CAD System for Long-Bone Segmentation and Fracture Detection. Int. Conf. Image and Signal Processing (ICISP 2008), Cherbourg-Octeville, France, 1–3 July 2008, pp. 153–162. Springer, Berlin, Heidelberg.
11 Sezgin, M. and Sankur, B. (2004) Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging, 13, 146–165.
12 Canny, J. (1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8, 679–698.
13 Thakare, P. (2011) A study of image segmentation and edge detection techniques. Int. J. Comput. Sci. Eng. (IJCSE), 3, 899–904.
14 Grau, V., Mewes, A.U.J., Alcaniz, M., Kikinis, R. and Warfield, S.K. (2004) Improved watershed transform for medical image segmentation using prior information. IEEE Trans. Med. Imaging, 23, 447–458.
15 Boykov, Y.Y. and Jolly, M.P. (2001) Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images. Int. Conf. Computer Vision, Vancouver, Canada, July 2001, pp. 105–112.
16 Yogeswara Rao, K., James Stephen, M. and Siva Phanindra, D. (2012) Classification based image segmentation approach. Int. J. Comput. Sci. Technol., 3, 658–659.
17 McInerney, T. and Terzopoulos, D. (1996) Deformable models in medical image analysis: a survey. Med. Image Anal., 1, 91–108.
18 Rogowska, J. (2000) Overview and Fundamentals of Medical Image Segmentation. In Handbook of Medical Imaging. Academic Press, Orlando, FL, USA.
19 Pham, D.L., Xu, C. and Prince, J.L. (2000) Current methods in medical image segmentation. Annu. Rev. Biomed. Eng., 2, 315–337.
20 Mahmoodi, S. (2011) Anisotropic diffusion for noise removal of band pass signals. Signal Process., 91, 1298–1307.
21 Mendrik, A.M., Vonken, E.J., Rutten, A., Viergever, M.A. and van Ginneken, B. (2009) Noise reduction in computed tomography scans using 3D anisotropic hybrid diffusion with continuous switch. IEEE Trans. Med. Imaging, 28, 1585–1594.
22 Zhang, Y., Brady, M. and Smith, S. (2001) Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging, 20, 45–57.
23 Said, T.B. and Azaiz, O. (2010) Segmentation of Liver Tumor Using HMRF-EM Algorithm with Bootstrap Resampling. 5th Int. Symp. I/V Communications and Mobile Networks (ISVC), Rabat, Morocco, 30 September–2 October 2010, pp. 1–4.
24 Singh, T.R., Roy, S., Singh, O.I., Sinam, T. and Singh, K.M. (2012) A new local adaptive thresholding technique in binarization. Int. J. Comput. Sci., 8, 271–277.
25 Jurie, F. and Dhome, M. (2001) A Simple and Efficient Template Matching Algorithm. 8th IEEE Int. Conf. Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001, pp. 544–549.
26 Mohamed Sathik, M., Mehaboobathunnisa, R., Haseena Thasneem, A.A. and Arumugam, S. (2015) Ray casting for 3D rendering – a review. Int. J. Innovations Eng. Technol., 5, 121–124.
27 Wu, J., Davuluri, P., Ward, K.R., Cockrell, C., Hobson, R. and Najarian, K. (2012) Fracture detection in traumatic pelvic CT images. Int. J. Biomed. Imaging, 2012, 327198.

Author notes: Handling editor: Suchi Bhandarkar.

© The British Computer Society 2018. All rights reserved. For permissions, please email: journals.permissions@oup.com

The proposed work is compared with the work of Wu et al. [27] and Choudhury et al. [3] having the sensitivity 0.884 and 0.864, respectively. On seeing the results, the proposed method detects crack well because the sensitivity of the proposed work is very high (0.957) when compared with the existing works. Similarly, the specificity is also very high (0.978) when compared with the existing works (0.887 and 0.881). Figure 6 depicts the TPR, FNR, TNR and FPR comparison of the above-mentioned techniques. From this, it is clearly understood that the proposed method detects crack well. Figure 7 shows the graphical chart of accuracy comparison. The overall accuracy rate of the proposed work is very high (0.965) when compared with the previous technique (0.903 and 0.876). On seeing the sensitivity, specificity and accuracy, the proposed work works well than the other methods. The computational time of the work (Table 3) is adequate and it yields an average execution time of 10 min 26 s with no prior training is required. The percentage of accuracy improvement (PAI) with the existing work is computed using the following equation:   PAI=AP−AEAE (19)where AP is the accuracy of the proposed method and AE is the accuracy of the existing work. The proposed method achieves 10% and 6.7% improvement with the existing work [3, 27]. Table 2. Sensitivity and specificity analysis. S. no.  Test image  Sensitivity  Specificity  Accuracy  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  1  T1  0.975  0.943  0.912  0.999  0.962  0.931  0.998  0.945  0.904  2  T2  0.981  0.917  0.935  0.986  0.913  0.894  0.988  0.925  0.893  3  T3  0.909  0.863  0.854  0.98  0.865  0.862  0.995  0.903  0.884  4  T4  0.975  0.972  0.903  0.998  0.905  0.894  0.891  0.863  0.864  5  T5  0.867  0.726  0.82  0.948  0.882  0.869  0.979  0.942  0.925  6  T6  0.969  0.859  0.793  0.964  0.813  0.824  0.955  0.863  0.831  7  T7  0.988  0.892  0.813  0.969  0.847  0.868  0.948  0.901  0.865  8  T8  0.995  0.903  0.883  0.979  0.911  0.905  0.962  0.879  0.842  Overall percentage  0.957  0.884  0.864  0.978  0.887  0.881  0.965  0.903  0.876  S. no.  Test image  Sensitivity  Specificity  Accuracy  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  1  T1  0.975  0.943  0.912  0.999  0.962  0.931  0.998  0.945  0.904  2  T2  0.981  0.917  0.935  0.986  0.913  0.894  0.988  0.925  0.893  3  T3  0.909  0.863  0.854  0.98  0.865  0.862  0.995  0.903  0.884  4  T4  0.975  0.972  0.903  0.998  0.905  0.894  0.891  0.863  0.864  5  T5  0.867  0.726  0.82  0.948  0.882  0.869  0.979  0.942  0.925  6  T6  0.969  0.859  0.793  0.964  0.813  0.824  0.955  0.863  0.831  7  T7  0.988  0.892  0.813  0.969  0.847  0.868  0.948  0.901  0.865  8  T8  0.995  0.903  0.883  0.979  0.911  0.905  0.962  0.879  0.842  Overall percentage  0.957  0.884  0.864  0.978  0.887  0.881  0.965  0.903  0.876  Table 2. Sensitivity and specificity analysis. S. no.  Test image  Sensitivity  Specificity  Accuracy  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. 
[3]  1  T1  0.975  0.943  0.912  0.999  0.962  0.931  0.998  0.945  0.904  2  T2  0.981  0.917  0.935  0.986  0.913  0.894  0.988  0.925  0.893  3  T3  0.909  0.863  0.854  0.98  0.865  0.862  0.995  0.903  0.884  4  T4  0.975  0.972  0.903  0.998  0.905  0.894  0.891  0.863  0.864  5  T5  0.867  0.726  0.82  0.948  0.882  0.869  0.979  0.942  0.925  6  T6  0.969  0.859  0.793  0.964  0.813  0.824  0.955  0.863  0.831  7  T7  0.988  0.892  0.813  0.969  0.847  0.868  0.948  0.901  0.865  8  T8  0.995  0.903  0.883  0.979  0.911  0.905  0.962  0.879  0.842  Overall percentage  0.957  0.884  0.864  0.978  0.887  0.881  0.965  0.903  0.876  S. no.  Test image  Sensitivity  Specificity  Accuracy  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  Proposed work  Wu et al. [27]  Choudhury et al. [3]  1  T1  0.975  0.943  0.912  0.999  0.962  0.931  0.998  0.945  0.904  2  T2  0.981  0.917  0.935  0.986  0.913  0.894  0.988  0.925  0.893  3  T3  0.909  0.863  0.854  0.98  0.865  0.862  0.995  0.903  0.884  4  T4  0.975  0.972  0.903  0.998  0.905  0.894  0.891  0.863  0.864  5  T5  0.867  0.726  0.82  0.948  0.882  0.869  0.979  0.942  0.925  6  T6  0.969  0.859  0.793  0.964  0.813  0.824  0.955  0.863  0.831  7  T7  0.988  0.892  0.813  0.969  0.847  0.868  0.948  0.901  0.865  8  T8  0.995  0.903  0.883  0.979  0.911  0.905  0.962  0.879  0.842  Overall percentage  0.957  0.884  0.864  0.978  0.887  0.881  0.965  0.903  0.876  Figure 6. View largeDownload slide TPR, FNR, TNR and FPR comparison. Figure 6. View largeDownload slide TPR, FNR, TNR and FPR comparison. Figure 7. View largeDownload slide Accuracy comparison. Figure 7. View largeDownload slide Accuracy comparison. Table 3. Computation time. Test image  Number of slices  Proposed work  Wu et al. [27]  Choudhury et al. [3]  T1  60  11 min 50 s  12 min 01 s  13 min 42 s  T2  49  7 min 42 s  8 min 50 s  8 min 22 s  T3  38  5 min 55 s  4 min 37 s  7 min 86 s  T4  60  13 min 42 s  13 min 25 s  15 min 32 s  T5  75  16 min  17 min 35 s  17 min 08 s  T6  60  10 min 43 s  11 min 45 s  9 min 43 s  T7  55  9 min 36 s  9 min 26 s  10 min 45 s  T8  52  8 min 40 s  9 min 08 s  11 min 38 s  Average  10 min 26 s  11 min 06 s  12 min 04 s  Test image  Number of slices  Proposed work  Wu et al. [27]  Choudhury et al. [3]  T1  60  11 min 50 s  12 min 01 s  13 min 42 s  T2  49  7 min 42 s  8 min 50 s  8 min 22 s  T3  38  5 min 55 s  4 min 37 s  7 min 86 s  T4  60  13 min 42 s  13 min 25 s  15 min 32 s  T5  75  16 min  17 min 35 s  17 min 08 s  T6  60  10 min 43 s  11 min 45 s  9 min 43 s  T7  55  9 min 36 s  9 min 26 s  10 min 45 s  T8  52  8 min 40 s  9 min 08 s  11 min 38 s  Average  10 min 26 s  11 min 06 s  12 min 04 s  Table 3. Computation time. Test image  Number of slices  Proposed work  Wu et al. [27]  Choudhury et al. [3]  T1  60  11 min 50 s  12 min 01 s  13 min 42 s  T2  49  7 min 42 s  8 min 50 s  8 min 22 s  T3  38  5 min 55 s  4 min 37 s  7 min 86 s  T4  60  13 min 42 s  13 min 25 s  15 min 32 s  T5  75  16 min  17 min 35 s  17 min 08 s  T6  60  10 min 43 s  11 min 45 s  9 min 43 s  T7  55  9 min 36 s  9 min 26 s  10 min 45 s  T8  52  8 min 40 s  9 min 08 s  11 min 38 s  Average  10 min 26 s  11 min 06 s  12 min 04 s  Test image  Number of slices  Proposed work  Wu et al. [27]  Choudhury et al. 
[3]  T1  60  11 min 50 s  12 min 01 s  13 min 42 s  T2  49  7 min 42 s  8 min 50 s  8 min 22 s  T3  38  5 min 55 s  4 min 37 s  7 min 86 s  T4  60  13 min 42 s  13 min 25 s  15 min 32 s  T5  75  16 min  17 min 35 s  17 min 08 s  T6  60  10 min 43 s  11 min 45 s  9 min 43 s  T7  55  9 min 36 s  9 min 26 s  10 min 45 s  T8  52  8 min 40 s  9 min 08 s  11 min 38 s  Average  10 min 26 s  11 min 06 s  12 min 04 s  3.4. ROC analysis The segmentation results of the proposed work are compared with other standard techniques by calculating the True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). The True Positive Rate (TPR) or the Sensitivity, and the False Positive Rate (FPR) or 1-specificity are calculated for the sample images. Figure 8 shows an ROC curve with Sensitivity vs. 1-Specificity for all patients. The results also demonstrate that our new proposed technique has the ability to detect crack well when compared with the other techniques. Figure 8. View largeDownload slide ROC curve. Figure 8. View largeDownload slide ROC curve. The ROC analysis says that the proposed method provides better detections than two standard methods. It is possible to improve the accuracy of crack detection by decreasing the error rates, this work can be extended on 3D images and the depth of the crack can also calculated. 4. CONCLUSION This paper presents a method for detecting fractures in bones using automated bone segmentation, adaptive thresholding and template matching. The results show that the proposed method is capable of detecting fractures accurately. The proposed algorithm produces segmentations with high sensitivity and specificity. The overall sensitivity of the proposed method of all patients is 0.957 and the overall true Negative Rate (Specificity) is 0.978. The ROC analysis says that the proposed method provides better detections than other methods. In future, this work can be extended to improve the accuracy of fracture detection by decreasing the error rates. References 1 Linda, C.H. and Jiji, G.W. ( 2011) Crack detection in X-ray images using Fuzzy Index Measure. Appl. Soft Comput. , 11, 3571– 3579. Google Scholar CrossRef Search ADS   2 Chowdhury, A.S., Bhattacharya, A., Bhandarkar, S.M., Datta, G.S., Yu, J.C. and Figueroa, R. ( 2007) Hairline Fracture Detection using MRF and Gibbs Sampling. IEEE Worhshop on Application of Computer Vision (WACV), Austin, TX, USA, 21 and 22 February 2007, pp. 56–61, IEEE Explore Digital Library. 3 Chowdhury, A.S., Bhandarkar, S.M., Datta, G. and Yu, J.C. ( 2006) Automated Detection of Stable Fracture Points In Computed Tomography Image Sequences, 3rd IEEE Int. Symp. Biomedical imaging: Nano to Macro, Arlington, VA, USA, 6–9 May 2006, pp. 1320–23, IEEE Explore Digital Library. 4 Yap, D.W.-H., Chen, Y., Leow, W.K., Howe, T.S. and Png, M.A. ( 2004) Detecting Femur Fractures by Texture Analysis of Trabeculae, 17th IEEE Int. Conf. Pattern Recognition (ICPR), 26 August 2004, Cambridge, UK, pp. 730–733, IEEE Explore Digital Library. 5 Lum, V.L.F., Leow, W.K., Chen, Y., Howe, T.S. and Png, M.A., ( 2005) Combining Classifiers for Bone Fracture Detection in X-ray Images, IEEE Int. Conf. Image Processing (ICIP), 14 September 2005, Genova, Italy, pp. 1149–1152, IEEE Explore Digital Library. 6 He, J.C., Leow, W.K. and Howe, T.S., ( 2007) Hierarchical Classifiers for Detection of Fractures in X-ray Images, 12th Int. Conf. Computer Analysis of Images and Patterns (CAIP), 27–29 August 2007, Vienna, Austria, pp. 
962–969, Springer Verlag, Berlin, Heidelberg. 7 Tian, T.P., Chen, Y., Leow, W.K., Hsu, W., Howe, T.S. and Png, M.A., ( 2003) Computing Neckshaft Angle of Femur for X-ray Fracture Detection, Int. Conf. Computer Analysis of Images and Patterns (CAIP 2003), 25–27 August 2003, Groningen, The Netherlands, pp. 82–89, Springer Verlag, Berlin, Heidelberg. 8 Lim, S.E., Xing, Y., Chen, Y., Leow, W.K., Howe, T.S. and Png, M.A., ( 2004) Detection of Femur and Radius Fractures in X-ray Images’, Int. Conf. Advances in Medical Signal and Information Processing, September 2004, Malte. 9 Jia, Y. and Jiang, Y., ( 2006) Active Contour Model with Shape Constraints for Bone Fracture Detection, Int. Conf. Computer Graphics, Imaging and Visualization, 26–28 July 2006, Sydney, Australia, pp. 90–95, IEEE Explore Digital Library. 10 Donnelley, M., Knoeles, G. and Hearn, T., ( 2008) A CAD System for Long-Bone Segmentation and Fracture Detection, Int. Conf. Image and Signal Processing (ICISP 2008), 1–3 July 2008, Cherbourg- Octerville, France, pp. 153–162, Springer Verlag, Berlin, Heidelberg. 11 Sezgin, M. and Aankur, B. ( 2004) Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging , 13, 146– 165. Google Scholar CrossRef Search ADS   12 Canny, J. ( 1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. , 8, 679– 698. Google Scholar CrossRef Search ADS PubMed  13 Thakare, P. ( 2011) A study of image segmentation and edge detection techniques. Int. J. Comput. Sci. Eng. (IJCSE) , 3, 899– 904. 14 Grau, V., Mewes, A.U.T., Alcaniz, M., Kikinis, R. and Warfield, S.K. ( 2004) Improved watershed transform for medical image segmentation using prior information. IEEE Trans. Med. Imaging , 23, 447– 458. Google Scholar CrossRef Search ADS PubMed  15 Boykov, Y.Y. and Jolly, M.P., ( 2001) Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D images, Int. Conf. Computer Vision, Vancouver, Canada, July 2001, pp. 105–112. 16 Yogeswara Rao, K., James Stephen, M. and Siva Phanindra, D. ( 2012) Classification based image segmentation approach. Int. J. Comput. Sci. Technol. , 3, 658– 659. 17 McInerney, T. and Terzopoulos, D. ( 1996) Deformable models in medical image analysis: a survey. Med. Image Anal. , 1, 91– 108. Google Scholar CrossRef Search ADS PubMed  18 Rogowska, J. ( 2000) Overview and Fundamentals of Medical Image Segmentation, Handbook of Medical Imaging . Academic Press, Inc, Orlando, FL, USA. 19 Pharm, D.L., Xu, C. and Prince, J.L., ( 1998) A survey of current methods in medical image segmentation. Annu. Rev. Biomed. Eng., 2, 315– 337. 20 Mahmoodi, S. ( 2011) Anisotropic diffusion for noise removal of band pass signals. Elsevier Signal Process. , 91, 1298– 1307. Google Scholar CrossRef Search ADS   21 Mendrik, A.M., Vonken, E.J., Rutten, A., Viergever, M.A. and Van Ginneken, B. ( 2009) Noise reduction in computed tomography scans using 3D anisotropic hybrid diffusion with continuous switch. IEEE Trans. Med. Imaging , 28, 1585– 1594. Google Scholar CrossRef Search ADS PubMed  22 Zhang, Y., Brady, M. and Smith, S. ( 2001) Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm. IEEE Trans. Med. Imaging , 20, 45– 57. Google Scholar CrossRef Search ADS PubMed  23 Said, T.B. and Azaiz, O., ( 2010), Segmentation of Liver Tumor using HMRF-EM Algorithm with Bootstrap Resampling, 5th Int. Symp. 
I/V Communications and Mobile Network (ISVC), 30 September–2 October 2010, Rabat Morocco, pp. 1–4, IEEE Explore Digital library. 24 Singh, T.R., Roy, S., Singh, O.I., Sinam, T. and Singh, K.M. ( 2012) A new local adaptive thresholding technique in binarization. Int. J. Comput. Sci. , 8, 271– 277. 25 Jurie, F. and Dhome, M., ( 2001) A simple and Efficient Template Matching Algorithm, Eighth IEEE Int. Conf. Computer Vision, ICCV 2001, 7–4 July 2001, Vancouver, BC, Canada, pp. 544–549, IEEE Explore Digital Library. 26 Mohamed Sathik, M., Mehaboobathunnisa, R., Haseena Thasneem, A.A. and Arumugam, S. ( 2015) Ray casting for 3D rendering – a review. Int. J. Innovations Eng. Technol. , 5, 121– 124. 27 Wu, J., Davuluri, P., Ward, K.R., Cockrell, C., Hobson, R. and Najarian, K. ( 2012) Fracture detection in traumatic pelvic CT images. Int. J. Biomed. Imaging , 2012, 327198. 10 pages. Google Scholar CrossRef Search ADS PubMed  Author notes Handling editor: Suchi Bhandarkar © The British Computer Society 2018. All rights reserved. For permissions, please email: journals.permissions@oup.com This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)

Journal

The Computer JournalOxford University Press

Published: Mar 9, 2018

There are no references for this article.

You’re reading a free preview. Subscribe to read the entire article.


DeepDyve is your
personal research library

It’s your single place to instantly
discover and read the research
that matters to you.

Enjoy affordable access to
over 18 million articles from more than
15,000 peer-reviewed journals.

All for just $49/month

Explore the DeepDyve Library

Search

Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly

Organize

Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place.

Access

Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals.

Your journals are on DeepDyve

Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more.

All the latest content is available, no embargo periods.

See the journals in your area

DeepDyve

Freelancer

DeepDyve

Pro

Price

FREE

$49/month
$360/year

Save searches from
Google Scholar,
PubMed

Create lists to
organize your research

Export lists, citations

Read DeepDyve articles

Abstract access only

Unlimited access to over
18 million full-text articles

Print

20 pages / month

PDF Discount

20% off