
Blood Vessel Image Segmentation

  • Language: Others
  • Size: 1.77M
  • Downloads: 12
  • Views: 85
  • Published: 2020-10-25
  • Category: General programming
  • Publisher: robot666
  • File format: .pdf
  • Points required: 2
 

Description

[Summary]

Very good; download it without hesitation. The PDF is an excerpt of the paper "A New Supervised Method for Blood Vessel Segmentation in Retinal Images" by Marín et al., published in IEEE Transactions on Medical Imaging, vol. 30, no. 1, January 2011. The text below reproduces the paper's description of the evaluation databases, the proposed method (Section IV), the experimental results (Section V), and the opening of the discussion (Section VI).
…retinal images, the DRIVE [54] and STARE [55] databases were used. These databases have been widely used by other researchers to test their vessel segmentation methodologies since, apart from being public, they provide manual segmentations for performance evaluation.

The DRIVE database comprises 40 eye-fundus color images (seven of which present pathology) taken with a Canon CR5 nonmydriatic 3CCD camera with a 45° field-of-view (FOV). Each image was captured at 768×584 pixels, 8 bits per color plane and, in spite of being offered in LZW-compressed TIFF format, they were originally saved in JPEG format. The database is divided into two sets, a test set and a training set, each containing 20 images. The test set provides the corresponding FOV masks for the images, which are circular (approximate diameter of 540 pixels), and two manual segmentations generated by two different specialists for each image. The segmentation of the first observer is accepted as ground truth and used for algorithm performance evaluation in the literature. The training set provides the FOV masks and a set of manual segmentations made by the first observer.

Fig. 1. Illustration of the preprocessing process: (a) green channel of the original image; (b) the upper image is a fragment of the original image containing a vessel with central light reflex, while the bottom image shows the effect of reflex removal; (c) background image; (d) shade-corrected image; (e) homogenized image; (f) vessel-enhanced image.

The STARE database, originally collected by Hoover et al. [38], comprises 20 eye-fundus color images (ten of them contain pathology) captured with a TopCon TRV-50 fundus camera at 35° FOV. The images were digitalized to 700×605 pixels, 8 bits per color channel, and are available in PPM format. The database contains two sets of manual segmentations made by two different observers. Performance is computed with the segmentations of the first observer as ground truth.

IV. PROPOSED VESSEL SEGMENTATION METHOD

This paper proposes a new supervised approach for blood vessel detection based on a neural network (NN) for pixel classification. The necessary feature vector is computed from preprocessed retinal images in the neighborhood of the pixel under consideration. The following process stages may be identified: 1) original fundus image preprocessing for gray-level homogenization and blood vessel enhancement, 2) feature extraction for pixel numerical representation, 3) application of a classifier to label each pixel as vessel or nonvessel, and 4) postprocessing for filling pixel gaps in detected blood vessels and removing falsely detected isolated vessel pixels.

Input images are monochrome and obtained by extracting the green band from the original RGB retinal images. The green channel provides the best vessel-background contrast of the RGB representation, while the red channel is the brightest color channel and has low contrast, and the blue one offers poor dynamic range. Thus, blood-containing elements of the retinal layer (such as vessels) are best represented and reach higher contrast in the green channel [56].

All parameters described below were set by experiments carried out on DRIVE images with the aim of obtaining the best segmentation performance on this database (performance was evaluated in terms of average accuracy; a detailed description is provided in Sections V-A and V-B). Therefore, they refer to retinas of approximately 540 pixels in diameter. The application of the methodology to retinas of different size (e.g., the diameter in pixels of STARE database retinas is approximately 650 pixels) demands either resizing the input images to fulfil this condition or adapting the whole set of parameters proportionately to the new retina size.

A. Preprocessing

Color fundus images often show important lighting variations, poor contrast, and noise. In order to reduce these imperfections and generate images more suitable for extracting the pixel features demanded in the classification step, a preprocessing comprising the following steps is applied: 1) vessel central light reflex removal, 2) background homogenization, and 3) vessel enhancement. Next, a description of the procedure, illustrated through its application to a STARE database fundus image (Fig. 1), is detailed.

1) Vessel Central Light Reflex Removal: Since retinal blood vessels have lower reflectance than other retinal surfaces, they appear darker than the background. Although the typical vessel cross-sectional gray-level profile can be approximated by a Gaussian-shaped curve (inner vessel pixels are darker than the outermost ones), some blood vessels include a light streak (known as a light reflex) which runs down the central length of the vessel. To remove this brighter strip, the green plane of the image is filtered by applying a morphological opening using a three-pixel-diameter disc, defined on a square grid with eight-connexity, as structuring element. The disc diameter was fixed to the minimum possible value to reduce the risk of merging close vessels. Iγ denotes the resultant image in what follows. An example of a vessel central light reflex and its removal by the opening operation is shown in Fig. 1(a) and (b).
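As an illustration of this step, the sketch below reproduces the green-channel extraction and the small-disc opening with OpenCV. The function name and the assumption of an RGB input array are ours, not the paper's.

```python
import cv2

def remove_central_light_reflex(rgb_image):
    """Extract the green band and suppress the vessel central light reflex
    with a morphological opening by a three-pixel-diameter disc."""
    green = rgb_image[:, :, 1]  # green channel: best vessel/background contrast
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    i_gamma = cv2.morphologyEx(green, cv2.MORPH_OPEN, disc)  # image I_gamma
    return i_gamma
```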
2) Background Homogenization: Fundus images often contain background intensity variation due to nonuniform illumination. Consequently, background pixels may have different intensities within the same image and, although their gray-levels are usually higher than those of vessel pixels (in green channel images), the intensity values of some background pixels are comparable to those of brighter vessel pixels. Since the feature vector used to represent a pixel in the classification stage is formed by gray-scale values, this effect may worsen the performance of the vessel segmentation methodology. With the purpose of removing these background lightening variations, a shade-corrected image is produced from a background estimate. This image is the result of a filtering operation with a large arithmetic mean kernel, as described below.

Firstly, a 3×3 mean filter is applied to smooth occasional salt-and-pepper noise. Further noise smoothing is performed by convolving the resultant image with a Gaussian kernel of dimensions m×m = 9×9, mean μ = 0 and variance σ² = 1.8², G(9, 1.8²). Secondly, a background image I_B is produced by applying a 69×69 mean filter [Fig. 1(c)]. When this filter is applied to the pixels of the FOV near the border, the results are strongly biased by the external dark region. To overcome this problem, out-of-the-FOV gray-levels are replaced by the average gray-level of the remaining pixels in the square. Then, the difference D between Iγ and I_B is calculated for every pixel:

D(x, y) = Iγ(x, y) − I_B(x, y).   (1)

In this respect, the literature reports shade-correction methods based on the subtraction of the background image from the original image [10], [12], [57], or on the division of the latter by the former [58], [59]. Both procedures rendered similar results upon testing, and neither showed any appreciable advantage over the other. The subtractive approach in (1) was used in the present work.

Finally, a shade-corrected image I_SC is obtained by linearly transforming the D values into integers covering the whole range of possible gray-levels ([0, 255] for 8-bit images). Fig. 1(d) shows the I_SC corresponding to a nonuniformly illuminated image. The proposed shade-correction algorithm is observed to reduce background intensity variations and enhance contrast in relation to the original green channel image.
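A minimal sketch of this shade-correction front end is given below, assuming OpenCV conventions. Replacing out-of-FOV values with the global in-FOV mean, rather than the local window mean described in the text, is a simplification of ours.

```python
import cv2
import numpy as np

def shade_correct(i_gamma, fov_mask):
    """Estimate the background with a large mean kernel and subtract it,
    then stretch the difference to the full 8-bit range (eq. (1))."""
    img = i_gamma.astype(np.float32)
    # Avoid bias from the dark region outside the FOV (simplified here to a
    # global in-FOV mean instead of the per-window mean of the text).
    img[fov_mask == 0] = img[fov_mask > 0].mean()
    img = cv2.blur(img, (3, 3))                    # 3x3 mean filter
    img = cv2.GaussianBlur(img, (9, 9), 1.8)       # G(9, 1.8^2)
    background = cv2.blur(img, (69, 69))           # background image I_B
    d = img - background                           # D = I_gamma - I_B
    i_sc = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX)
    return i_sc.astype(np.uint8)                   # shade-corrected I_SC
```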
Besides the background intensity variations within images, intensities can also reveal significant variations between images due to different illumination conditions in the acquisition process. In order to reduce this influence, a homogenized image I_H [Fig. 1(e)] is produced as follows: the histogram of I_SC is displaced toward the middle of the gray-scale by modifying pixel intensities according to the gray-level global transformation function

g_output = 0 if g < 0;  255 if g > 255;  g otherwise   (2)

where

g = g_input + 128 − g_input_max   (3)

and g_input and g_output are the gray-level variables of the input and output images (I_SC and I_H, respectively). The variable g_input_max denotes the gray-level presenting the highest number of pixels in I_SC. By means of this operation, pixels with gray-level g_input_max, which are observed to correspond to the background of the retina, are set to 128 in 8-bit images. Thus, background pixels of images acquired under different illumination conditions are standardized around this value. Fig. 2(a), (b) and (d), (e) show this effect for two fundus images of the STARE database.

Fig. 2. Two examples of application of the preprocessing to two images with different illumination conditions: (a), (d) green channel of the original images; (b), (e) homogenized images; (c), (f) vessel-enhanced images.

3) Vessel Enhancement: The final preprocessing step consists in generating a new vessel-enhanced image (I_VE), which proves more suitable for the further extraction of the moment invariants-based features (see Section IV-B). Vessel enhancement is performed by estimating the complementary image of the homogenized image I_H, denoted I_H^c, and subsequently applying the morphological top-hat transformation [Fig. 1(f)]:

I_VE = I_H^c − γ(I_H^c)   (4)

where γ is a morphological opening with a disc of eight pixels in radius. Thus, while bright retinal structures are removed (i.e., the optic disc and the possible presence of exudates or reflection artifacts), the darker structures remaining after the opening become enhanced (i.e., blood vessels, the fovea, and the possible presence of microaneurysms or hemorrhages). Samples of the vessel enhancement operation are shown in Fig. 2(c) and (f) for two fundus images with variable illumination conditions.
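These two operations can be sketched as follows; the function names are illustrative, and the comments point back to (2)-(4) above.

```python
import cv2
import numpy as np

def homogenize(i_sc):
    """Histogram displacement (2)-(3): shift the most frequent gray-level
    of I_SC (the retina background) to 128, clipping to [0, 255]."""
    hist = np.bincount(i_sc.ravel(), minlength=256)
    g_input_max = int(np.argmax(hist))
    shifted = i_sc.astype(np.int32) + 128 - g_input_max
    return np.clip(shifted, 0, 255).astype(np.uint8)   # homogenized I_H

def enhance_vessels(i_h):
    """Vessel enhancement (4): complement I_H so vessels become bright,
    then apply a white top-hat with a disc of 8-pixel radius."""
    i_h_c = 255 - i_h
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))  # 2*8+1
    return cv2.morphologyEx(i_h_c, cv2.MORPH_TOPHAT, disc)         # I_VE
```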
B. Feature Extraction

The aim of the feature extraction stage is pixel characterization by means of a feature vector, a pixel representation in terms of some quantifiable measurements which may be easily used in the classification stage to decide whether a pixel belongs to a real blood vessel or not. In this paper, the following sets of features were selected:

  • Gray-level-based features: features based on the differences between the gray-level of the candidate pixel and a statistical value representative of its surroundings.
  • Moment invariants-based features: features based on moment invariants describing small image regions formed by the gray-scale values of a window centered on the represented pixel.

1) Gray-Level-Based Features: Since blood vessels are always darker than their surroundings, features based on describing the gray-level variation in the surroundings of candidate pixels seem a good choice. A set of gray-level-based descriptors taking this information into account were derived from the homogenized images I_H, considering only a small pixel region centered on the described pixel (x, y). S^w_{x,y} stands for the set of coordinates in a w×w square window centered on point (x, y). These descriptors can be expressed as

f1(x, y) = I_H(x, y) − min{ I_H(s, t) : (s, t) ∈ S^9_{x,y} }   (5)
f2(x, y) = max{ I_H(s, t) : (s, t) ∈ S^9_{x,y} } − I_H(x, y)   (6)
f3(x, y) = I_H(x, y) − mean{ I_H(s, t) : (s, t) ∈ S^9_{x,y} }   (7)
f4(x, y) = std{ I_H(s, t) : (s, t) ∈ S^9_{x,y} }   (8)
f5(x, y) = I_H(x, y).   (9)
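A direct transcription of (5)-(9), assuming NumPy and ignoring window truncation at the image borders:

```python
import numpy as np

def gray_level_features(i_h, x, y, w=9):
    """Descriptors (5)-(9) computed on a w x w window of the homogenized
    image I_H centered on pixel (x, y)."""
    half = w // 2
    win = i_h[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    center = float(i_h[y, x])
    return np.array([
        center - win.min(),   # f1, eq. (5)
        win.max() - center,   # f2, eq. (6)
        center - win.mean(),  # f3, eq. (7)
        win.std(),            # f4, eq. (8)
        center,               # f5, eq. (9)
    ])
```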
2) Moment Invariants-Based Features: The vasculature in retinal images is known to be piecewise linear and can be approximated by many connected line segments. For detecting these quasi-linear shapes, which are not all equally wide and may be oriented at any angle, shape descriptors invariant to translation, rotation, and scale change may play an important role. Within this context, the moment invariants proposed by Hu [60] provide an attractive solution and are included in the feature vector. In this paper, they are computed as follows.

Given a pixel (x, y) of the vessel-enhanced image I_VE, a subimage is generated by taking the region defined by S^17_{x,y}. The size of this region was fixed to 17×17 so that, considering that the region is centered on the middle of a "wide" vessel (8-9 pixels wide, referred to retinas of approximately 540 pixels in diameter), the subimage includes an approximately equal number of vessel and nonvessel pixels. For this subimage, denoted I^{x,y}_VE, the 2-D moment of order (p + q) is defined as

m_pq = Σ_i Σ_j i^p j^q I^{x,y}_VE(i, j),  p, q = 0, 1, 2, …   (10)

where the summations run over the spatial coordinates i and j spanning the subimage, and I^{x,y}_VE(i, j) is the gray-level at point (i, j). The corresponding central moment is defined as

μ_pq = Σ_i Σ_j (i − ī)^p (j − j̄)^q I^{x,y}_VE(i, j)   (11)

where

ī = m_10 / m_00,  j̄ = m_01 / m_00   (12)

are the coordinates of the center of gravity of the subimage. The normalized central moment of order (p + q) is defined as

η_pq = μ_pq / μ_00^λ   (13)

where

λ = (p + q)/2 + 1,  (p + q) = 2, 3, …   (14)

A set of seven moment invariants under size, translation, and rotation, known as the Hu moment invariants, can be derived from combinations of regular moments. Among them, our tests revealed that only those defined by

φ1 = η20 + η02   (15)
φ2 = (η20 − η02)² + 4η11²   (16)

constitute the combination providing optimal performance in terms of average accuracy (see Section V-B). The inclusion of the remaining moments decreases classification performance and increases the computation needed for classification. Moreover, the module of the logarithm was used instead of the values themselves: the logarithm reduces the dynamic range, and the module avoids the complex numbers that would result from taking the logarithm of negative moment invariants.

Fig. 3 shows several sample pixels, marked with solid white dots on an I_VE image [Fig. 3(a)], and the subimages I^{x,y}_VE generated around them [Fig. 3(b)-(i)]. Pairs of pixels were selected on different vessels, one inside and the other outside the vessel but near enough that both subimages contain the vessel. Table I shows the moment values corresponding to each subimage. The numbers are close, indicating a high degree of invariance to size, rotation, and translation. Moments computed as described above characterize a vessel numerically, independently of its width, orientation, and location in the subimage. However, they are not useful for describing the central pixel of the subimage as vessel or nonvessel, since their values do not distinguish between these two situations.

To overcome this problem, moments are computed on new subimages I^{x,y}_Hu produced by multiplying the original ones, I^{x,y}_VE, by an equal-dimension (17×17) matrix of Gaussian values with mean 0 and variance 1.7², G(17, 1.7²). That is, for every point of coordinates (i, j),

I^{x,y}_Hu(i, j) = I^{x,y}_VE(i, j) × G(17, 1.7²)(i, j).   (17)

With this choice of parameters, the 9×9 central values of G(17, 1.7²) contain 97% of the area of the represented Gaussian distribution, the remaining values being close to 0 (supposing that the central pixel of I^{x,y}_VE is located on the middle of a "wide" vessel, these 9×9 central values of G(17, 1.7²) correspond to vessel pixels in I^{x,y}_VE). This operation is illustrated in Fig. 3(j)-(q), and its effect is clearly observed in the associated moment values (Table II): these values become sensitive for describing vessel and nonvessel central pixels, as they now reflect significant differences between the two cases. Both φ1 and φ2 values, in comparison with their original ones, increase if they describe vessel pixels and decrease otherwise.

In conclusion, the following descriptors were considered part of the feature vector of a pixel located at (x, y):

f6(x, y) = |log(φ1)|   (18)
f7(x, y) = |log(φ2)|   (19)

where φ1 and φ2 are the moment invariants given by (15) and (16), computed on the subimages I^{x,y}_Hu generated according to (17).

Fig. 3. Examples of obtaining pixel environments for moment-invariant calculation: (a) vessel-enhanced subimage, with four pairs of pixels marked with white dots (P-ka and P-kb, k = 1, 2, 3, 4, where P-ka are vessel pixels and P-kb are background pixels close to their corresponding pair); (b)-(i) extracted subimages I^{x,y}_VE; (j)-(q) subimages I^{x,y}_Hu, the result of multiplying the originals in (b)-(i) by the Gaussian matrix.

Table I. Module of the logarithm of the φ1 and φ2 moments calculated from the subimages I^{x,y}_VE shown in Fig. 3(b)-(i).

Table II. Module of the logarithm of the φ1 and φ2 moments calculated from the subimages I^{x,y}_Hu shown in Fig. 3(j)-(q).
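The sketch below follows (10)-(19). The paper does not state how the Gaussian matrix is normalized, so the peak normalization used here is an assumption.

```python
import numpy as np

def gaussian_window(size=17, sigma=1.7):
    """17x17 Gaussian weighting matrix G(17, 1.7^2) used in (17),
    peak-normalized (an assumption; the text gives only mean and variance)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.max()

def hu_features(i_ve, x, y, w=17):
    """f6 and f7 of (18)-(19): weight the 17x17 subimage of I_VE by the
    Gaussian matrix (17), then take |log phi1| and |log phi2|."""
    half = w // 2
    sub = i_ve[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    sub = sub * gaussian_window(w)                        # I_Hu, eq. (17)
    i_idx, j_idx = np.mgrid[0:w, 0:w].astype(np.float64)
    m = lambda p, q: np.sum(i_idx**p * j_idx**q * sub)    # regular moments (10)
    m00 = m(0, 0)
    ibar, jbar = m(1, 0) / m00, m(0, 1) / m00             # center of gravity (12)
    mu = lambda p, q: np.sum((i_idx - ibar)**p * (j_idx - jbar)**q * sub)  # (11)
    eta = lambda p, q: mu(p, q) / m00 ** ((p + q) / 2.0 + 1)               # (13)-(14)
    phi1 = eta(2, 0) + eta(0, 2)                          # (15)
    phi2 = (eta(2, 0) - eta(0, 2))**2 + 4 * eta(1, 1)**2  # (16)
    return abs(np.log(phi1)), abs(np.log(phi2))           # f6, f7
```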
C. Classification

In the feature extraction stage, each pixel of a fundus image is characterized by a vector in a 7-D feature space:

F(x, y) = (f1(x, y), …, f7(x, y)).   (20)

A classification procedure then assigns one of the classes C1 (vessel) or C2 (nonvessel) to each candidate pixel when its representation is known. In order to select a suitable classifier, the distribution of the training set data (described below) in the feature space was analyzed. This analysis showed that the degree of linear separability of the classes was not high enough for the accuracy level required for vasculature segmentation in retinal images, so a nonlinear classifier was necessary. The following nonlinear classifiers can be found in the existing literature on this topic: the kNN method [49], [51], support vector machines [52], the Bayesian classifier [50], and neural networks [43]-[48]. A multilayer feedforward NN was selected in this paper.

Two classification stages can be distinguished: a design stage, in which the NN configuration is decided and the NN is trained, and an application stage, in which the trained NN is used to classify each pixel as vessel or nonvessel to obtain a binary vessel image.

1) Neural Network Design: A multilayer feedforward network consisting of an input layer, three hidden layers, and an output layer is adopted in this paper. The input layer is composed of a number of neurons equal to the dimension of the feature vector (seven neurons). Regarding the hidden layers, several topologies with different numbers of neurons were tested; three hidden layers, each containing 15 neurons, provided the optimal NN configuration. The output layer contains a single neuron and is attached, like the remaining units, to a nonlinear logistic sigmoid activation function, so its output ranges between 0 and 1. This choice was grounded on interpreting the NN output as a posterior probability.

The training set, S_T, is composed of a set of N candidates for which the feature vector [F, (20)] and the classification result (C1 or C2: vessel or nonvessel) are known:

S_T = {(F_m, k_m)},  m = 1, …, N;  k ∈ {1, 2}.   (21)
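As a stand-in for this 7-15-15-15-1 topology, scikit-learn's MLPClassifier can reproduce the architecture; the solver and iteration budget below are assumptions, since the text specifies only back-propagation training.

```python
from sklearn.neural_network import MLPClassifier

# Three hidden layers of 15 logistic units; the single logistic output unit
# is implicit in the binary classifier, whose predicted probability plays
# the role of the paper's posterior p(C1 | F(x, y)).
nn = MLPClassifier(hidden_layer_sizes=(15, 15, 15),
                   activation='logistic',
                   solver='sgd',        # back-propagation with SGD (assumed)
                   max_iter=2000)

# X_train: (N, 7) matrix of normalized feature vectors F; y_train: 0/1 labels.
# nn.fit(X_train, y_train)
# p_vessel = nn.predict_proba(X_unseen)[:, 1]   # vessel probability values
```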
The samples forming S_T were collected from manually labeled nonvessel and vessel pixels in the DRIVE training images. Specifically, around 30000 pixel samples, fairly divided into vessel and nonvessel pixels, were used (as a reference, this number represents 0.65% of the total number of DRIVE test image pixels to be classified later on). Unlike other authors [52], [53], who selected their training sets by random pixel-sample extraction from the available manual segmentations of DRIVE and STARE images, we produced our own training set by hand. (The used training set is available online at http://www.uhu.es/retinopathy/eng/bd.php.) As discussed in the literature, gold-standard images may contain errors (see [61] for a comprehensive discussion of this issue) due to the considerable difficulty involved in the creation of these handmade images. To reduce the risk of introducing errors into S_T and, therefore, noise into the NN, we opted for carefully selecting specific training samples covering all possible vessel, background, and noise patterns. Moreover, it should be pointed out that the network trained with the just-defined S_T, in spite of taking information from DRIVE images only, was applied to compute method performance on both the DRIVE and STARE databases.

Since the features f_i of F have very different ranges and values, each feature is independently normalized to zero mean and unit variance by applying

f̂_i = (f_i − μ_i) / σ_i   (22)

where μ_i and σ_i stand for the average and standard deviation of the i-th feature calculated over S_T. Once S_T is established, the NN is trained by adjusting the connection weights; the back-propagation training algorithm [62] was used for this purpose.

2) Neural Network Application: At this stage, the trained NN is applied to an "unseen" fundus image to generate a binary image in which blood vessels are identified from the retinal background: the pixels' mathematical descriptions are individually passed through the NN. In our case, the NN input units receive the set of features provided by (5)-(9), (18), and (19), normalized according to (22). Since a logistic sigmoid activation function was selected for the single neuron of the output layer, the NN decision determines a classification value between 0 and 1. Thus, a vessel probability map indicating the probability of each pixel being part of a vessel is produced. Illustratively, the probability map corresponding to a DRIVE database fundus image [Fig. 4(a)] is shown as an image in Fig. 4(b); bright pixels indicate a higher probability of being vessel pixels.

In order to obtain a binary vessel segmentation, a thresholding scheme on the probability map is used to decide whether a particular pixel is part of a vessel or not. The classification procedure assigns class C1 (vessel) or C2 (nonvessel) to each candidate pixel depending on whether its associated probability exceeds a threshold Th. Thus, a classification output image I_CO [Fig. 4(c)] is obtained by associating the classes C1 and C2 with the gray-levels 255 and 0, respectively. Mathematically,

I_CO(x, y) = 255 (= C1) if p(C1 | F(x, y)) > Th;  0 (= C2) otherwise   (23)

where p(C1 | F(x, y)) denotes the probability of a pixel (x, y), described by the feature vector F(x, y), belonging to class C1. The optimal Th value is discussed in Section V-B.

Fig. 4. (a) Green channel of the original image. (b) Obtained probability map represented as an image. (c) Thresholded image. (d) Postprocessed image.

D. Postprocessing

Classifier performance is enhanced by the inclusion of a two-step postprocessing stage: the first step is aimed at filling pixel gaps in detected blood vessels, while the second is aimed at removing falsely detected isolated vessel pixels. From visual inspection of the NN output, vessels may have a few gaps (i.e., pixels completely surrounded by vessel points but not labeled as vessel pixels). To overcome this problem, an iterative filling operation is performed under the rule that pixels with at least six neighbors classified as vessel points must also be vessel pixels. Besides, small isolated regions misclassified as blood vessel pixels are also observed. In order to remove these artifacts, the pixel area of each connected region is measured, and every connected region with an area below 25 pixels is reclassified as nonvessel. An example of the final vessel-segmented image after this further processing stage is shown in Fig. 4(d).
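Both postprocessing steps are easy to express with SciPy. The use of 8-connectivity for the neighbor count and for region labeling is our assumption.

```python
import numpy as np
from scipy import ndimage

def postprocess(binary, min_area=25):
    """Iterative gap filling (a nonvessel pixel with >= 6 vessel neighbors
    becomes vessel), then removal of connected regions below min_area."""
    vessels = binary.astype(bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                      # 8-neighborhood, excluding center
    while True:
        counts = ndimage.convolve(vessels.astype(int), kernel, mode='constant')
        fill = ~vessels & (counts >= 6)
        if not fill.any():
            break
        vessels |= fill
    # Remove small isolated regions (area < 25 pixels).
    labels, n = ndimage.label(vessels, structure=np.ones((3, 3)))
    areas = ndimage.sum(vessels, labels, index=np.arange(1, n + 1))
    small_labels = np.where(areas < min_area)[0] + 1
    vessels[np.isin(labels, small_labels)] = False
    return vessels
```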
V. EXPERIMENTAL RESULTS

A. Performance Measures

In order to quantify the algorithmic performance of the proposed method on a fundus image, the resulting segmentation is compared to its corresponding gold-standard image. This image is obtained by manual creation of a vessel mask in which all vessel pixels are set to one and all nonvessel pixels are set to zero. Automated vessel segmentation performance can thus be assessed.

In this paper, our algorithm was evaluated in terms of sensitivity (Se), specificity (Sp), positive predictive value (Ppv), negative predictive value (Npv), and accuracy (Acc). Taking the contingencies of Table III into account, these metrics are defined as

Se = TP / (TP + FN)   (24)
Sp = TN / (TN + FP)   (25)
Ppv = TP / (TP + FP)   (26)
Npv = TN / (TN + FN)   (27)
Acc = (TP + TN) / (TP + FN + TN + FP).   (28)

Table III. Vessel classification contingencies.

                      Vessel present        Vessel absent
Vessel detected       True positive (TP)    False positive (FP)
Vessel not detected   False negative (FN)   True negative (TN)

Se and Sp are the ratios of well-classified vessel and nonvessel pixels, respectively. Ppv is the ratio of pixels classified as vessel that are correctly classified, and Npv is the ratio of pixels classified as background that are correctly classified. Finally, Acc is a global measure providing the ratio of all well-classified pixels.
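A sketch of (24)-(28) computed over FOV pixels only, with NumPy:

```python
import numpy as np

def performance_measures(pred, gold, fov):
    """Se, Sp, Ppv, Npv and Acc of a binary segmentation against its
    gold standard, restricted to pixels inside the FOV mask."""
    p = pred[fov > 0].astype(bool)
    g = gold[fov > 0].astype(bool)
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    tn = np.sum(~p & ~g)
    return {
        'Se':  tp / (tp + fn),                   # (24)
        'Sp':  tn / (tn + fp),                   # (25)
        'Ppv': tp / (tp + fp),                   # (26)
        'Npv': tn / (tn + fn),                   # (27)
        'Acc': (tp + tn) / (tp + fp + fn + tn),  # (28)
    }
```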
In addition, algorithm performance was also measured with receiver operating characteristic (ROC) curves. A ROC curve is a plot of the true positive fraction (Se) versus the false positive fraction (1 − Sp) obtained by varying the threshold on the probability map. The closer a curve approaches the top left corner, the better the performance of the system. The area under the curve (AUC), which is 1 for a perfect system, is a single measure quantifying this behavior.

B. Proposed Method Evaluation

The method was evaluated on DRIVE and STARE database images with available gold-standard images. Since the images' dark background outside the FOV is easily detected, Se, Sp, Ppv, Npv, and Acc values were computed for each image considering FOV pixels only. Since FOV masks are not provided for STARE images, they were generated with an approximate diameter of 650×550. The results are listed in Tables IV and V. (The final vessel-segmented images are available online at http://www.uhu.es/retinopathy/eng/bd.php.) The last row of each table shows the average Se, Sp, Ppv, Npv, and Acc values over the 20 images in the database.

Table IV. Performance results on DRIVE database images (per-image rows omitted). Average: Se 0.7067, Sp 0.9801, Ppv 0.8433, Npv 0.9582, Acc 0.9452.

Table V. Performance results on STARE database images (per-image rows omitted). Average: Se 0.6944, Sp 0.9819, Ppv 0.8227, Npv 0.9659, Acc 0.9526.

The performance results shown in Tables IV and V were obtained with the same threshold Th for all the images of a given database (0.63 and 0.91 for DRIVE and STARE images, respectively). These values were set to provide the maximum average accuracy (Acc_MAX) in each database, as follows. For a given Th value, one Acc value is obtained for each of the 20 images selected for testing on a given database; these 20 Acc values are then averaged to obtain a single performance measure, Acc, linked to the selected Th. Repeating this operation for different Th values yields Acc as a function of the threshold, and the final Th selected for a given database is the one providing the maximum Acc value, Acc_MAX.

Fig. 5 shows the Acc values calculated for Th values from 0 to 1 (in steps of 0.02) for both the DRIVE and STARE databases, with the Acc_MAX values and their corresponding Th values marked for each database. It is worth mentioning that the Acc variation shows no significant dependence on Th: although different optimum Th values are reached depending on the database, a wide range of Th values provides Acc values very close to Acc_MAX. Therefore, Th can be concluded not to be a critical parameter when performance is assessed in terms of Acc, since Acc varies slowly with it. The influence of Th on system performance is also visible in the ROC curves for the two databases shown in Fig. 6, which were produced by calculating the true and false positive fractions on all test images through Th-threshold variations. The AUC measured for the two curves was 0.9588 and 0.9769 for the DRIVE and STARE databases, respectively.

Fig. 5. Acc of the segmentation algorithm as a function of the threshold parameter Th, with (Th, Acc_MAX)_DRIVE = (0.63, 0.9452) and (Th, Acc_MAX)_STARE = (0.91, 0.9526).

Fig. 6. ROC curves for the DRIVE and STARE databases; measured AUC values: AUC_DRIVE = 0.9588, AUC_STARE = 0.9769.
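The Th-selection procedure reduces to a simple sweep; the sketch below assumes lists of probability maps, gold standards, and FOV masks for the 20 test images.

```python
import numpy as np

def select_threshold(prob_maps, gold_masks, fov_masks):
    """Sweep Th from 0 to 1 in steps of 0.02, average the per-image Acc
    over the test images, and keep the Th giving Acc_MAX."""
    best_th, best_acc = 0.0, 0.0
    for th in np.arange(0.0, 1.0001, 0.02):
        accs = []
        for prob, gold, fov in zip(prob_maps, gold_masks, fov_masks):
            pred = (prob > th)[fov > 0]
            truth = (gold > 0)[fov > 0]
            accs.append(np.mean(pred == truth))  # per-image Acc
        avg_acc = float(np.mean(accs))
        if avg_acc > best_acc:
            best_th, best_acc = th, avg_acc
    return best_th, best_acc                     # (Th, Acc_MAX)
```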
On the other hand, the spatial distribution of the classification errors produced by the segmentation algorithm, FN and FP, was studied. The following four situations were considered: FN produced in thin and in non-thin vessel pixels, and FP produced in pixels near to and far from vessel borders. For that purpose, thin and non-thin vessels were separated in each gold-standard image [as an example, see Fig. 7(b)]. A vessel was considered thin if its width is lower than 50% of the width of the widest optic disc vessel; otherwise, the vessel is considered non-thin. Likewise, a FP is considered to be far from a vessel border if the distance to its nearest vessel border pixel in the gold standard is over two pixels; otherwise, the FP is considered to be near.

Fig. 7. Illustration of the spatial location of classification errors on a segmentation of a STARE image: (a) green channel of the original image; (b) thin and non-thin blood vessels extracted from the manual segmentation, in white and dark-gray, respectively; (c) segmentation of (a) generated by the presented algorithm; (d) FN and TP obtained by the proposed algorithm, in white and dark-gray, respectively; (e) FP and TN obtained by the proposed algorithm, in white and dark-gray, respectively.
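The near/far criterion can be sketched with a distance transform to the gold-standard vessel border:

```python
import numpy as np
from scipy import ndimage

def fp_near_far(pred, gold, max_dist=2):
    """Split the false positives of a segmentation into those near (<= 2 px)
    and far (> 2 px) from the nearest gold-standard vessel border pixel."""
    gold_v = gold.astype(bool)
    # Vessel border: vessel pixels with at least one background neighbor.
    border = gold_v & ~ndimage.binary_erosion(gold_v)
    # Distance of every pixel to the nearest border pixel.
    dist = ndimage.distance_transform_edt(~border)
    fp = pred.astype(bool) & ~gold_v
    near = int(np.sum(fp & (dist <= max_dist)))
    far = int(np.sum(fp & (dist > max_dist)))
    return near, far
```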
Table VI summarizes the results of this study. It shows the average ratio of FN and FP provided by the segmentation algorithm for the 20 test images of the DRIVE and STARE databases, together with the average percentage of FN and FP falling in each of the spatial locations considered.

Table VI. Study of FN and FP spatial location on the DRIVE and STARE databases.

          FN in thin vessels   FN in non-thin vessels   FP near vessels   FP far from vessels
DRIVE     46.08%               53.92%                   86.25%            13.75%
STARE     34.60%               65.40%                   83.55%            16.45%

For both databases, the percentage of FN produced in non-thin vessel pixels was higher than that in thin vessel pixels. However, taking into account that thin vessels comprise a considerably lower number of pixels than non-thin ones, the value obtained for thin vessels bears a more negative impact. This can be observed in Fig. 4(d) and Fig. 7(d): while FN in non-thin vessels involve no degradation of the segmented structure, FN in thin vessels produce disconnections in some of them. Regarding the FP distribution, FPs tend to lie near vessel borders. As can be checked in Fig. 7(e), this means that most FPs produced by the segmentation algorithm slightly enlarge the vessels rather than introduce meaningless isolated noise.

C. Comparison to Other Methods

In order to compare our approach to other retinal vessel segmentation algorithms, Acc and AUC were used as measures of method performance. Since these measurements were performed by other authors, this choice facilitates comparing our results to theirs. Tables VII and VIII show performance comparison results in terms of Acc and AUC, respectively, for the following published methods: Chaudhuri et al. [37], Hoover et al. [38], Jiang and Mojon [43], Niemeijer et al. [50], Staal et al. [51], Mendonça et al. [36], Soares et al. [52], Martinez-Perez et al. [46], Ricci and Perfetti [53], and Cinsdikici and Aydin [42]. All these supervised or rule-based methods were briefly commented in Section II. The values shown in both tables are presented for each database as reported by their authors. If values are not available for a specific database, or were not calculated for the 20 images selected for testing, they are not included and appear as gaps. The last column of each table indicates the overall Acc and AUC when both databases are taken into account.

Table VII. Performance results compared to other methods on the STARE and DRIVE databases, in terms of average accuracy.

Method type   Method                       DRIVE    STARE    DRIVE+STARE
Supervised    Staal et al. [51]            0.9441   -        -
              Niemeijer et al. [50]        0.9417   -        -
              Soares et al. [52]           0.9466   0.9480   0.9473
              Ricci and Perfetti [53]      0.9595   0.9646   0.9621
              Marín et al. (this work)     0.9452   0.9526   0.9489
Rule-based    Chaudhuri et al. [37]        0.8773   -        -
              Hoover et al. [38]           -        0.9275   -
              Jiang and Mojon [43]         0.8911   0.9009   0.8960
              Mendonça et al. [36]         0.9463   0.9479   0.9471
              Martinez-Perez et al. [46]   0.9344   0.9410   0.9377
              Cinsdikici and Aydin [42]    0.9293   -        -

Table VIII. Performance results compared to other methods on the STARE and DRIVE databases, in terms of area under the ROC curve.

Method type   Method                       DRIVE    STARE    DRIVE+STARE
Supervised    Staal et al. [51]            0.9520   -        -
              Niemeijer et al. [50]        0.9294   -        -
              Soares et al. [52]           0.9614   0.9671   0.9642
              Ricci and Perfetti [53]      0.9633   0.9680   0.9656
              Marín et al. (this work)     0.9588   0.9769   0.9678
Rule-based    Chaudhuri et al. [37]        -        -        -
              Hoover et al. [38]           -        0.7590   -
              Jiang and Mojon [43]         0.9327   0.9298   0.9312
              Mendonça et al. [36]         -        -        -
              Martinez-Perez et al. [46]   -        -        -
              Cinsdikici and Aydin [42]    0.9407   -        -

An overview of the segmentation results on DRIVE images shows that our proposed method reaches better performance than most of the other methods, and is comparable to the remaining detection techniques. The Acc achieved by our algorithm is outperformed only by Soares et al. [52], Mendonça et al. [36], and Ricci and Perfetti [53]. Regarding the approaches by Soares et al. [52] and Mendonça et al. [36], it is important to point out that our method clearly outperforms the Acc these authors reported on STARE images; therefore, our approach renders better overall Acc across both databases than theirs. The same conclusions are drawn when these methods are compared in terms of AUC: on DRIVE images, the AUC provided by our proposal is only lower than those reported by Soares et al. [52] and Ricci and Perfetti [53] (Mendonça et al. [36] did not report AUC values), but due to the excellent AUC result on the STARE database, our approach reaches the highest average AUC when both databases are considered.

The proposed method proves especially useful for vessel detection in STARE images. Its application to this database resulted in the second highest accuracy score among all experiments (only behind Ricci and Perfetti's approach [53]) and the first when AUC is the reference measurement. This result gains more importance from the fact that our classifier was trained only on DRIVE images, unlike the other supervised approaches presented in Tables VII and VIII. For instance, since there are no available labeled training images for STARE, Soares et al. [52] performed leave-one-out tests on this database (i.e., every image is classified by using samples from the other 19 images), while Ricci and Perfetti [53] built their classifier by using a training set comprising samples randomly extracted from the test images themselves. In our case, with the purpose of using one and the same trained classifier for testing the method on the 20 STARE images, and of including no sample belonging to the test set in the training, we opted for forming the training set by collecting pixels from the DRIVE training images. Thus, the method's suitability for application to any fundus image can be checked in a more realistic way. We should also mention that these good results with respect to other existing approaches were obtained on images containing pathological artifacts: the STARE database contains ten images with pathologies, while the test set of DRIVE contains only four; moreover, abnormal regions are wider in STARE.

Regarding performance comparison in terms of Acc when results are jointly analyzed for DRIVE and STARE images (Table VII, last column), our algorithm renders greater accuracy than the other authors' algorithms, being outperformed only by Ricci and Perfetti's proposal [53]. However, this method proved very dependent on the training set. To research the dependence of their classification method on the dataset, Ricci and Perfetti [53] carried out an experiment based on, firstly, training the classifier on each of the DRIVE and STARE databases and then testing it on the other. Their maximum accuracy values are shown in Table IX. It can be observed that performance is clearly worse under cross training, since Acc strongly decreases from 0.9595 to 0.9266 on DRIVE and from 0.9646 to 0.9452 on STARE images. Therefore, as assumed by these authors, classifier retraining is necessary before applying their methodology to a new database.

Table IX. Performance results compared to Ricci and Perfetti's method, in terms of average accuracy with cross training.

Method                     DRIVE (training on STARE)   STARE (training on DRIVE)
Ricci and Perfetti [53]    0.9266                      0.9452
Marín et al. (this work)   0.9448                      0.9526
To verify our own method's dependence on the training set, the same experiment was completed: performance was computed on the DRIVE database after training the classifier with STARE images (as previously mentioned, our accuracy on STARE was already obtained by training on DRIVE images). The resulting Acc values are shown in Table IX to facilitate comparison between both methods under identical conditions. In this case, it is clearly observed that our estimated performance in terms of accuracy is higher, thus proving higher training-set robustness.

VI. DISCUSSION AND CONCLUSION

Previous methods for blood vessel detection in retinal images can be classified into rule-based and supervised methods. This study proposes a method within the latter category. This method …