This is an open access article published by the IET, Chinese Association for Artificial Intelligence and Chongqing University of Technology under the Creative Commons Attribution-NonCommercial License (http://creativecommons.org/licenses/by-nc/3.0/)
Annotations are critical for machine learning and for developing computer-aided diagnosis (CAD) algorithms. Good performance is critical to the adoption of CAD systems, which generally rely on training with a wide variety of annotated data. However, a vast amount of medical data is either unlabeled or annotated only at the image level. This poses a problem for exploring data-driven approaches, such as deep learning, for CAD. In this paper, we propose a novel combination of crowdsourcing and synthetic image generation for training deep neural network (DNN)-based lesion detection. The noisy nature of crowdsourced annotations is overcome by assigning each crowd subject a reliability factor based on their performance and by requiring region-of-interest markings from the crowd. A generative adversarial network-based solution is proposed to generate synthetic images with lesions while controlling the overall severity level of the disease. We demonstrate the reliability of the crowdsourced annotations and synthetic images by presenting a solution for training the DNN with data drawn from a heterogeneous mixture of annotations. Experimental results obtained for hard exudate detection from retinal images show that training with refined crowdsourced data/synthetic images is effective, as detection performance in terms of sensitivity improves by 25%/27% over training with expert markings alone.
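The abstract's handling of noisy crowd annotations can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; this sketch assumes (purely for illustration) that each annotator's reliability factor is estimated as the Dice overlap of their region-of-interest mask with an expert marking on a small calibration subset, and that masks are then fused by a reliability-weighted soft vote:

```python
# Illustrative sketch, NOT the paper's exact method: weight each crowd
# annotator by a reliability factor and fuse their binary ROI masks.
import numpy as np

def reliability(crowd_mask, expert_mask):
    """Dice overlap with an expert marking, used as a reliability proxy."""
    inter = np.logical_and(crowd_mask, expert_mask).sum()
    total = crowd_mask.sum() + expert_mask.sum()
    # Both masks empty: trivially in agreement.
    return 2.0 * inter / total if total else 1.0

def fuse_masks(crowd_masks, weights, threshold=0.5):
    """Reliability-weighted soft vote over a stack of binary masks."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    masks = np.asarray(crowd_masks, dtype=float)
    soft = np.tensordot(w, masks, axes=1)  # weighted average mask
    return soft >= threshold               # binarise the consensus
```

In use, reliabilities would be estimated once on calibration images, then applied when fusing each annotator's markings on unseen images; an unreliable annotator's mask is thereby down-weighted in the consensus.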