Solution to overcome the sparsity issue of annotated data in medical domain

Annotations are critical for machine learning and for developing computer-aided diagnosis (CAD) algorithms. Good performance is critical to the adoption of CAD systems, and it generally relies on training with a wide variety of annotated data. However, a vast amount of medical data is either unlabelled or annotated only at the image level, which hinders data-driven approaches such as deep learning for CAD. In this paper, we propose a novel combination of crowdsourcing and synthetic image generation for training deep neural network-based lesion detection. The noisy nature of crowdsourced annotations is handled by assigning each crowd subject a reliability factor based on their performance and by requiring region-of-interest markings from the crowd. A generative adversarial network-based solution is proposed to generate synthetic images containing lesions, with control over the overall severity level of the disease. We demonstrate the reliability of the crowdsourced annotations and synthetic images by presenting a solution for training a deep neural network (DNN) with data drawn from a heterogeneous mixture of annotations. Experimental results for hard exudate detection in retinal images show that training with refined crowdsourced data/synthetic images is effective: detection sensitivity improves by 25%/27% over training with expert markings alone.
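The abstract describes weighting noisy crowd annotations by a per-annotator reliability factor before they are used for training. As a rough illustration of that idea only (not the paper's actual implementation), the Python sketch below fuses crowdsourced region-of-interest masks into a soft label map, weighting each annotator by agreement with a small expert-marked reference set; the function names, the Dice-based reliability score, and the weighted-averaging scheme are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's method): fuse noisy crowdsourced
# region-of-interest (ROI) masks into one soft label map, weighting each
# annotator by a reliability factor estimated from agreement with a small
# expert-annotated reference subset.

def reliability(crowd_masks, expert_masks, eps=1e-8):
    """Per-annotator reliability as mean Dice overlap with expert masks."""
    scores = []
    for annot_masks in crowd_masks:           # one list of masks per annotator
        dices = []
        for m, e in zip(annot_masks, expert_masks):
            inter = np.logical_and(m, e).sum()
            dices.append(2.0 * inter / (m.sum() + e.sum() + eps))
        scores.append(np.mean(dices))
    return np.asarray(scores)

def fuse_annotations(masks, weights, eps=1e-8):
    """Reliability-weighted average of binary ROI masks -> soft label map."""
    masks = np.asarray(masks, dtype=np.float32)       # (n_annotators, H, W)
    weights = weights / (weights.sum() + eps)
    return np.tensordot(weights, masks, axes=1)       # (H, W), values in [0, 1]

# Toy usage: three annotators marking lesions on a 256 x 256 retinal image.
rng = np.random.default_rng(0)
masks = rng.random((3, 256, 256)) > 0.9               # random stand-in masks
w = np.array([0.9, 0.6, 0.3])                         # e.g. from reliability()
soft_label = fuse_annotations(masks, w)
```

The resulting soft label map could then serve as a training target for a lesion-detection DNN alongside expert-marked and synthetic images; how the paper actually combines these sources is detailed in the full text.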
