An Unsupervised Method for Detection and Validation of The Optic Disc and The Fovea

Mrinal Haloi, Samarendra Dandapat, and Rohit Sinha

Abstract—In this work, we present a novel method for detecting two retinal image features, the optic disc and the fovea, from colour fundus photographs of dilated eyes, for use in a Computer-Aided Diagnosis (CAD) system. A saliency-map-based method is used to detect the optic disc, followed by an unsupervised probabilistic Latent Semantic Analysis (pLSA) step that validates the detection. The validation concept is based on the distinct vessel structure within the optic disc. Using the clinical knowledge of the standard location of the fovea with respect to the optic disc, the macula region is then estimated. A detection accuracy of 100% is achieved for the optic disc and the macula on the MESSIDOR and DIARETDB1 datasets, and 98.8% on the STARE dataset.

Index Terms—Retinal images, PLSA, Image processing, CAD.

I. INTRODUCTION

Analysis of retinal images for the detection of pathological [23] and non-pathological features is very important for automatic computer-aided detection and diagnosis of retinal diseases. With the emergence of medical image analysis research aimed at faster and more accurate analysis at lower cost, researchers are developing computer software to facilitate medical treatment. Computer-aided diagnosis will help doctors in remote areas, speed up the analysis of retinal images, and lower the cost of treatment by reducing manpower. Most work on retinal image analysis uses retinal images obtained by fundus photography [20], [3] with a dilated pupil.

The optic disc, the fovea, the blood vessels and the veins are the main features of a retinal image, and different retinal diseases affect different features. Age-related macular degeneration damages the macula region, which can also be affected by diabetic retinopathy. Hard and soft exudates are pathological features associated with diabetic retinopathy.
Damage to the optic disc is a result of glaucoma [19], a major cause of vision loss; the ratio of the optic disc cup and neuro-retinal rim surface areas determines the presence and progression of glaucoma. Diameter changes of the retinal arteries and veins are associated with different cardiovascular diseases: thinning of the arteries and widening of the veins result in an increased risk of stroke and myocardial infarction [19].

Detection of the optic disc, the fovea, the blood vessels and the veins is very important for pathological analysis of retinal images [23], since the severity of a disease depends on the location of its features with respect to these essential landmarks, and some pathological features occur only in specific areas. The optic disc (OD) can be observed as the brightest part of a normal retinal image, and the fovea as the darkest region. In pathological images it is sometimes very hard to detect the optic disc and the macula region, due to abnormalities caused by different diseases. In most recent works, the detection accuracy for the optic disc and the macula region in pathological images is low, and no proper validation of the detected region is presented. Even though the location of the macula can be estimated from the optic disc location, that region may be severely affected by age-related macular degeneration; to decide whether the region is normal or pathological, proper validation is needed. Efficient detection of the OD and the fovea must therefore cope with adverse illumination variation and with damage to these features caused by eye disease; illuminance variation is also linked with the imaging setup.

M. Haloi, S. Dandapat, and R. Sinha are with the Department of Electrical and Communication Engineering, Indian Institute of Technology, Guwahati, India. E-mail: (h.mrinal, samaren, rsinha)

Fig. 1: Retinal Features

Fig.
1 shows a typical retinal image with the optic disc, the macula region, the blood vessels and the pathological feature exudates.

arXiv:1601.06608v1 [cs.CV] 25 Jan 2016

Retinal image analysis is a mature area and much work has been done in it, but a complete state-of-the-art system performing all analyses with high accuracy has not yet been achieved. Researchers have used image-processing and machine-learning approaches for the detection and classification of various features. It is not possible to give a complete review of these works here; a few are selected for discussion. Lu et al. [11] used a line operator to detect the circular brightness structure of the optic disc; their method fails to address large illuminance variation with respect to neighbouring regions and is limited to a clear circular brightness structure. A matched-filter-based method was proposed by Abdel et al. [9]: they first preprocess the images by illumination normalization and histogram equalization. Their algorithm's performance depends on the vessel segmentation, and it is affected by adverse illuminance variation. Using retinal vessel direction information, Foracchia et al. [10] proposed a geometrical parametric model for optic disc detection; the algorithm needs vessel-segmented images, so vessel segmentation performance affects its results. A probability-map-based localization of the optic disc is presented by Budai et al. [17]: they construct two probability maps, one based on image brightness and the other on vessel segmentation, and combine the two maps to locate the optic disc. This method does not address the problems of illuminance variation and the difficulty posed by pathological images. A model-based approach was used by Li et al. [13] to detect the optic disc, the macula and exudates. A principal component
based method was used to locate the optic disc, and an active-shape-based method for shape detection of the optic disc and the fovea.

In this work, two new methods for the detection of retinal features are developed. The optic disc detection is based on a saliency map that can capture significant variation of local structure. Once the potential locations of the optic disc are detected, an unsupervised method based on probabilistic Latent Semantic Analysis is used to validate whether a region is the optic disc or not; the validation relies on the specific vasculature structure in that region. This method gives 100% accuracy in optic disc detection on challenging images with pathological symptoms. The algorithm is described in sections II(A) and II(B). Macula region detection, using the information of the optic disc location and the main courses of the blood vessels, is described in section II(C).

II. METHOD

This method comprises two main parts. In the first stage, detection and validation of the optic disc is performed; the fovea detection then depends on the OD detection. A complete overview of the method used in this work is given in Fig. 2.

A. Optic Disc Detection

The optic disc is one of the most important anatomical structures of the retina. It is also known as the blind spot, because it contains no light-sensitive rods or cones to respond to a light stimulus. The retinal arteries and veins emerge from the left of the optic disc. Its central white depression is called the physiologic cup, and the horizontal diameter of the cup should not exceed 1/2 that of the entire disc; otherwise it is a sign of the pathologic optic disc cupping associated with glaucoma. For detection of the optic disc, a saliency region detection algorithm [1] is used to identify salient regions of the image based on local structure variation.
For this, the image is first converted to the CIE Lab colour space, where the luminance value (the L channel) and the colour values (the a and b channels) can be separated. The conversion process [6] is as follows:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4887180 & 0.3106803 & 0.2006017 \\ 0.1762044 & 0.8129847 & 0.0108109 \\ 0.0000000 & 0.0102048 & 0.9897952 \end{bmatrix} \times \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$

$$L = 116\,[h(Y/Y_W)] - 16, \qquad a = 500\,[h(X/X_W) - h(Y/Y_W)], \qquad b = 200\,[h(Y/Y_W) - h(Z/Z_W)] \quad (2)$$

$$h(q) = \begin{cases} \sqrt[3]{q}, & \text{if } q > 0.008856 \\ 7.787\,q + 16/116, & \text{otherwise} \end{cases} \quad (3)$$

Fig. 2: Method Overview

Because the optic disc's colour and structure vary from its neighbourhood, a saliency map (SM) can detect it efficiently. Saliency is computed from the variation of the local contrast of an image patch with respect to its neighbourhood: even if the colour variation is low, strong structural variation lets the SM capture the OD. The process is repeated at different scales, to improve the accuracy of the map and keep finer details. The contrast-based saliency $c_{i,j}$ of the pixel at position $(i, j)$ is computed as follows [1]:

$$c_{i,j} = D\left[\left(\frac{1}{N_1}\sum_{p=1}^{N_1} v_p\right), \left(\frac{1}{N_2}\sum_{q=1}^{N_2} v_q\right)\right] \quad (4)$$

where $D$ is the Euclidean distance between the average feature vectors of two non-overlapping patches $P_1$ and $P_2$ containing $N_1$ and $N_2$ pixels respectively; if the two vectors are correlated, the Mahalanobis distance is used instead. Here $v_p$ and $v_q$ are the feature vectors of pixels in the two regions. The vector of each pixel has three features, the luminance L and the two colour channels a and b of the CIE Lab colour space:

$$v_i = [L_i, a_i, b_i] \quad (5)$$

Fig. 3: Window selection from Segmented Region

The size of patch $P_1$ is taken as 9×9, and that of $P_2$ is calculated from equation (6), where the input image has $c$ columns.
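As an illustration of Eqs. (1)–(5), the colour conversion and the patch-contrast saliency can be sketched in a few lines of Python. This is a reconstruction from the formulas above, not the authors' MATLAB implementation; the white point and the patch radii are assumptions for illustration:

```python
import numpy as np

# RGB -> CIE Lab per Eqs. (1)-(3); the white point defaults to the
# equal-energy illuminant (an assumption), for which each matrix row sums to 1.
M = np.array([[0.4887180, 0.3106803, 0.2006017],
              [0.1762044, 0.8129847, 0.0108109],
              [0.0000000, 0.0102048, 0.9897952]])

def h(q):
    # Eq. (3)
    return np.where(q > 0.008856, np.cbrt(q), 7.787 * q + 16.0 / 116.0)

def rgb_to_lab(rgb, white=(1.0, 1.0, 1.0)):
    """rgb: (H, W, 3) array in [0, 1]; returns the (L, a, b) features of Eq. (5)."""
    xyz = rgb @ M.T                                   # Eq. (1), per pixel
    x, y, z = (xyz[..., k] / white[k] for k in range(3))
    L = 116.0 * h(y) - 16.0                           # Eq. (2)
    a = 500.0 * (h(x) - h(y))
    b = 200.0 * (h(y) - h(z))
    return np.stack([L, a, b], axis=-1)

def contrast(feat, i, j, r1=4, r2=16):
    """Eq. (4): distance between the mean Lab vectors of the inner patch P1
    (9x9 for r1=4) and the outer patch P2, both centred at (i, j)."""
    p1 = feat[i - r1:i + r1 + 1, j - r1:j + r1 + 1].reshape(-1, 3).mean(axis=0)
    p2 = feat[i - r2:i + r2 + 1, j - r2:j + r2 + 1].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(p1 - p2))             # Euclidean distance D
```

Summing `contrast` maps computed for several outer radii `r2` gives the multi-scale map of Eq. (7).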
$$\frac{c}{2} \geq N_{P_2} \geq \frac{c}{8} \quad (6)$$

The final saliency map value is computed as the sum of the saliency maps of the image over the different scales $S$:

$$smap_{i,j} = \sum_{S} c_{i,j} \quad (7)$$

To segment the relevant region from the saliency map, a patch-based segmentation technique is used. For each patch, the mean $m_p$ and standard deviation $\sigma_p$ are computed; if $SM_{i,j}$ from equation (8) is greater than 1, the pixel is included in the final interest map, otherwise it is discarded:

$$SM_{i,j} = \frac{smap(i,j) - m_p}{\sigma_p} \quad (8)$$

In the final map SM, other features besides the optic disc may also be detected, due to the presence of pathological symptoms in the retinal images. To obtain the final location of the optic disc, a validation method is applied.

B. Validation of Detected Optic Disc

The final saliency map may contain several regions, optic disc or non-optic-disc. To validate whether a region is the optic disc, we use an unsupervised probabilistic Latent Semantic Analysis classification algorithm. The optic disc consists of a complex pattern of vessels originating from it; no other part of the retina has this structure. This property is exploited for classification, which makes the method independent of the luminance of the optic disc region: even if the luminance is degraded by some pathology, the method can still accurately detect the optic disc. Because the specific vasculature structure may itself be damaged by pathological problems, a part-based classification model is used. Our model comprises six classes: five for the optic disc and its parts, and a sixth class of other retinal features for differentiation. The optic disc is divided into four parts as shown in Fig. 4; this formulation efficiently detects the optic disc even if its structure is damaged.

Fig. 4: OD part and corresponding HOG features

In the testing phase, multiple windows, shifted to the right, left, top and bottom, are chosen around each region.
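The patch-wise standardisation of Eq. (8) that produces the interest map can be sketched as follows; the 16-pixel patch size is an assumption for illustration, and this is not the authors' implementation:

```python
import numpy as np

def interest_map(smap, patch=16):
    """Eq. (8): for each patch compute mean m_p and std sigma_p, and keep
    pixels whose standardised saliency exceeds 1."""
    H, W = smap.shape
    keep = np.zeros((H, W), dtype=bool)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            blk = smap[i:i + patch, j:j + patch]
            m_p, sigma_p = blk.mean(), blk.std()
            if sigma_p > 0:                # a flat patch contains nothing salient
                keep[i:i + patch, j:j + patch] = (blk - m_p) / sigma_p > 1.0
    return keep
```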
Variable window sizes, as shown in Fig. 3, are used, and the window pixel values are those of the original image at those locations. The stages involved in designing the classifier are discussed below. Since an image is very high-dimensional data, we pre-process it to reduce its dimensionality using a visual-codebook formation method: each image is represented as a bag of visual words and, using the histogram-of-words concept, converted to a document over a previously designed vocabulary.

1) Feature Extraction: To extract meaningful edge information from the images, the HOG descriptor [7] is used. HOG is a very popular local descriptor for object recognition; it captures the fine structure of images and is well suited to this task. HOG computation is based on gradient magnitude and phase. A 16×16 window is selected with 50% overlap with the neighbouring window, and is further divided into 2×2 cells of size 8×8. For each window, the gradient phase is quantized into 9 equally spaced bins, and the gradient magnitude determines the value of each bin.

2) LLC Codebook Formation: For the formation of the visual words, locality-constrained linear coding (LLC) [4] is used. This algorithm generates similar codes for similar descriptors by sharing bases; locality also leads to sparsity. The idea that locality matters more than sparsity is expressed by the optimization problem below. Here $X = [x_1, x_2, ..., x_N] \in \Re^{D \times N}$ is a set of $D$-dimensional local descriptors and $B = [b_1, b_2, ..., b_M] \in \Re^{D \times M}$ is a codebook of $M$ entries:

$$\min_{C} \sum_{i=1}^{N} \| x_i - B c_i \|^2 + \lambda \| d_i \odot c_i \|^2 \quad (9)$$

$$s.t.\ \mathbf{1}^T c_i = 1, \ \forall i \quad (10)$$

$$d_i = \exp\left(\frac{dist(x_i, B)}{\sigma}\right) \quad (11)$$

where $\odot$ denotes element-wise multiplication, and $d_i \in \Re^M$ is the locality adaptor that gives a different freedom to each basis vector, proportional to its similarity to the input descriptor $x_i$.

Fig. 5: PLSA algorithm idea
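The LLC coding of Eqs. (9)–(11) has a closed-form solution for each descriptor, given in the LLC paper [4]; a minimal sketch follows, with λ, σ and the row-wise codebook layout chosen for illustration:

```python
import numpy as np

def llc_code(x, B, lam=1e-4, sigma=1.0):
    """Solve Eqs. (9)-(10) for one descriptor x (D,) against codebook
    B (M, D), one basis per row. The locality adaptor d follows Eq. (11)."""
    M_words = B.shape[0]
    d = np.exp(np.linalg.norm(B - x, axis=1) / sigma)   # Eq. (11)
    shifted = B - x                                     # b_j - x_i
    C = shifted @ shifted.T                             # (M, M) data covariance
    # Constrained minimiser of c^T (C + lam*diag(d^2)) c subject to 1^T c = 1
    c = np.linalg.solve(C + lam * np.diag(d ** 2), np.ones(M_words))
    return c / c.sum()                                  # enforce Eq. (10)
```

Because of the sum-to-one constraint, the reconstruction term reduces to a quadratic form in the shifted bases, which is what makes the per-descriptor solve linear.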
Fig. 6: Codeword formation

Here $dist(x_i, B)$ is the Euclidean distance between $x_i$ and the bases of $B$, and shift invariance is ensured by the constraint of equation (10). In our case each image yields a 4608-dimensional HOG descriptor $x_i$, and a codebook $B$ of 113 words was formed. Each image is then expressed as a combination of these words and represented as a histogram of visual words. Fig. 6 depicts the codebook-formation scenario.

3) PLSA Model: Probabilistic latent semantic analysis [2] is a topic-discovery model whose concept is based on latent-variable analysis. An image can be regarded as a collection of topics: every image is treated as a text document with words from a specific vocabulary, formed by applying the LLC algorithm to HOG features of the training images. Suppose a collection of $N$ images (documents) $D = \{d_1, d_2, ..., d_N\}$ is available, with a corresponding vocabulary of size $N_1$, $W = \{w_1, w_2, ..., w_{N_1}\}$, and let there be $N_2$ topics $Z = \{z_1, z_2, ..., z_{N_2}\}$. The model parameters are computed using the expectation-maximization method; Fig. 5 shows the concept of the pLSA model and its training and testing process.

$$P(z|d,w) = \frac{P(z)\,P(d|z)\,P(w|z)}{\sum_{z'} P(z')\,P(d|z')\,P(w|z')} \quad (12)$$

$$P(w|z) = \frac{\sum_{d} n(d,w)\,P(z|d,w)}{\sum_{d,w'} n(d,w')\,P(z|d,w')} \quad (13)$$

$$P(d|z) = \frac{\sum_{w} n(d,w)\,P(z|d,w)}{\sum_{d',w} n(d',w)\,P(z|d',w)} \quad (14)$$

$$P(z) = \frac{\sum_{d,w} n(d,w)\,P(z|d,w)}{R} \quad (15)$$

$$R \equiv \sum_{d,w} n(d,w) \quad (16)$$

4) Fuzzy-KNN Classification: Since our framework is based on topic modelling and discovery, a fuzzy kNN classification technique [12] is used to assign a test image the appropriate topic relative to the training images. Fuzzy kNN performs better than the traditional kNN algorithm because it weights the neighbours.
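The EM updates of Eqs. (12)–(15) can be sketched directly; the topic count, iteration budget and random initialisation below are placeholders, not the authors' settings:

```python
import numpy as np

def plsa_em(n_dw, n_topics=6, iters=30, seed=0):
    """pLSA via EM. n_dw: (D, W) word-count matrix n(d, w).
    Returns P(z), P(d|z), P(w|z); see Eqs. (12)-(16)."""
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    Pz = np.full(n_topics, 1.0 / n_topics)
    Pd_z = rng.random((D, n_topics)); Pd_z /= Pd_z.sum(axis=0)
    Pw_z = rng.random((W, n_topics)); Pw_z /= Pw_z.sum(axis=0)
    R = n_dw.sum()                                    # Eq. (16)
    for _ in range(iters):
        # E-step, Eq. (12): posterior P(z|d,w), shape (D, W, Z)
        joint = Pz * Pd_z[:, None, :] * Pw_z[None, :, :]
        Pz_dw = joint / joint.sum(axis=2, keepdims=True)
        # M-step: expected counts n(d,w) P(z|d,w)
        nz = n_dw[:, :, None] * Pz_dw
        Pw_z = nz.sum(axis=0); Pw_z /= Pw_z.sum(axis=0, keepdims=True)  # Eq. (13)
        Pd_z = nz.sum(axis=1); Pd_z /= Pd_z.sum(axis=0, keepdims=True)  # Eq. (14)
        Pz = nz.sum(axis=(0, 1)) / R                                    # Eq. (15)
    return Pz, Pd_z, Pw_z
```

The dense (D, W, Z) posterior is fine for a sketch; a real implementation would iterate only over non-zero counts.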
In the training stage we calculate $P(w|z)$, which is used as input to the testing algorithm to compute $P(z|d_{test})$. A K-nearest-neighbour algorithm then classifies the test image using the probability distribution $P(z|d_{train})$:

$$u_i(x) = \frac{\sum_{j=1}^{K} u_{ij} \left( \frac{1}{\|x - x_j\|^{2/(m-1)}} \right)}{\sum_{j=1}^{K} \left( \frac{1}{\|x - x_j\|^{2/(m-1)}} \right)} \quad (17)$$

where $u_i$ is the membership strength to be computed, and $u_{ij}$ is the previously labelled membership of the $j$-th vector in the $i$-th class. The final class label of the query point $x$ is computed as follows:

$$u_0(x) = \arg\max_i (u_i(x)) \quad (18)$$

C. Macula Localization

The macula is the central region of the retina, situated at the posterior pole of the eye between the superior and inferior temporal arteries. Its centre lies at a distance of 2.5 D from the optic disc centre, where D is the optic disc diameter. The fovea is located at the centre of the macula and is responsible for specialized high-acuity vision.

Fig. 7: Parabolic model Fitting

The fovea region is made up exclusively of cones; without it, fine detail could not be seen. Age-related macular degeneration, diabetic macular edema and similar conditions are the most common disorders of the macula region. From study and observation, the fovea, the centre of the macula, is located at a distance of 2.5 D along the axis of symmetry of a parabola whose vertex is at the centre of the optic disc. A parabola is therefore fitted to the main courses of the blood vessels; the process is depicted in Fig. 7.

1) Vessel Point Selection and Curve Fitting: To extract the blood vessels from the image, the algorithm described in [18] is used. In the first stage the binary map of the vessels is extracted, and connected-component analysis is then used to remove small unconnected parts as a pre-processing step. To obtain the centreline of the blood vessels by removing their borders, a morphological skeletonization algorithm [16] is used; we also compute the distance transform [15] of the binary vessel map.
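The candidate-point selection built on the distance transform can be sketched as follows. SciPy's Euclidean distance transform stands in for [15], the skeleton is assumed precomputed by the morphological algorithm of [16], and the kept fraction is a placeholder for the paper's cumulative-histogram threshold:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def candidate_points(vessel_map, skeleton, keep_frac=0.3):
    """vessel_map, skeleton: binary (H, W) arrays. Weight each skeleton
    pixel by the local vessel half-width (the distance transform), then
    keep only the widest fraction, discarding thin vessels so that the
    main courses remain."""
    dist = distance_transform_edt(vessel_map)       # distance to background
    weights = skeleton * dist                       # point-wise multiplication
    vals = np.sort(weights[weights > 0])
    if vals.size == 0:
        return np.zeros_like(weights, dtype=bool)
    thresh = vals[int((1.0 - keep_frac) * (vals.size - 1))]
    return weights >= thresh
```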
The final threshold for removing small vessels and veins is computed using the cumulative histogram count and the real data volume. Since the real interest is in extracting the main courses of the blood vessels, a point-wise multiplication of the skeleton and the distance map is used to obtain the potential candidate points.

A parabolic model is fitted to the main course of the blood vessels using a least-squares non-linear optimization algorithm:

$$y = \frac{x^2}{4\,r\sin\theta} \quad (19)$$

where $r$ and $\theta$ are two constants, calculated using Newton's non-linear least-squares optimization method as follows. Supposing $(x_i, y_i)$ are the vessel coordinates, minimize

$$\sum_{i=1}^{m} (y_i - f(x_i))^2 \quad (20)$$

Once the position of the fovea is estimated, we need to determine whether the macula region is affected by any disease. For this, a template-matching method is applied, with the macula region of healthy eyes as the standard template. A window of size 1.5 D × 1.5 D centred on the fovea proved to be efficient for this task, and its distance (error) from the standard template is computed; if the error is very high, the macula region is affected by age-related macular degeneration.

Fig. 8: Left: Optic disc HOG feature; Right: non-optic-disc element

TABLE I: Results for the optic disc and the macula

Database    # Pathological Img   Resolution    Acc (%)
MESSIDOR    300                  1000 × 1504   100
DIARETDB1   89                   1152 × 1500   100
STARE       81                   605 × 700     98.8

TABLE II: Comparison of different optic disc detection methods on the STARE dataset

Method                 # Images   Acc (%)   # Failed img
Proposed Method        81         98.8      1
Abdel et al. [9]       81         98.8      1
Foracchia et al. [10]  81         97.5      2
Lu et al. [11]         81         96.3      3

III. RESULTS AND DISCUSSION

To analyse the accuracy of this method on different sets of images, the publicly available MESSIDOR [21], STARE [8] and DIARETDB1 [14] datasets were used. Images of varying difficulty from these databases were tested to evaluate the algorithm's accuracy; they include images with pathological features such as haemorrhages, exudates and microaneurysms.
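Returning to the curve-fitting step of Section II-C: only the product $4\,r\sin\theta$ in Eq. (19) is identifiable from the points, so Eq. (20) admits a closed-form least-squares solution for it. A sketch (the paper uses Newton's method instead):

```python
import numpy as np

def fit_parabola(xs, ys):
    """Least-squares fit of Eq. (19), y = x^2 / (4 r sin(theta)), to the
    candidate vessel points (xs, ys). Returns k = 4 r sin(theta); the
    model is linear in 1/k, so Eq. (20) is minimised in closed form."""
    x2 = np.asarray(xs, dtype=float) ** 2
    ys = np.asarray(ys, dtype=float)
    inv_k = (x2 @ ys) / (x2 @ x2)     # argmin_a of sum (y_i - a x_i^2)^2
    return 1.0 / inv_k
```

With the parabola known, the fovea is placed at a distance of 2.5 D from the optic disc centre along its axis of symmetry.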
These datasets are provided by experts in ophthalmology with proper annotation of the feature locations in the images. Together, the three datasets comprise 400 retinal images with a variety of pathological symptoms, captured under different conditions; the cameras used to capture the images also differ across the databases. We used the MATLAB platform on a Windows 8.1 machine with an Intel i7 processor to test this method.

In Fig. 3 the result of the saliency map is shown with the possible selection windows; it is clear that, due to pathological symptoms, irrelevant features are also captured. To obtain the exact location of the optic disc, a validation process is performed in the detected areas, with multiple windows of size [122, 112] around each detected region.

For validation of the optic disc detection, the pLSA classifier is trained with 400 training images, selected to include all types of possible difficulty. From each image the optic disc and its four parts are selected for training, and each window is resized to [122, 112] using bilinear interpolation. From each window, HOG features as shown in Fig. 4 and Fig. 8 are extracted. The use of the part-based model improved the accuracy of this method and can detect