DRAN: Deep recurrent adversarial network for automated pancreas segmentation

Automated pancreas segmentation in abdominal computed tomography (CT) scans is of high clinical relevance (e.g. for pancreatic cancer diagnosis and prognosis), but extremely difficult because the pancreas is a soft, small, and flexible abdominal organ with high anatomical variability, which causes previous segmentation methods to suffer from low precision. In this study, the authors present a new deep recurrent adversarial network (DRAN) to tackle this challenge. DRAN comprises three steps: (i) preserving the global resolution of CT scans and adaptively modifying the receptive field of the kernels through a dilated convolution autoencoder module; (ii) modelling the contextual spatial correlation between neighbouring CT scan patches with a specially designed local long short-term memory (LSTM) module; and (iii) improving performance and generalisation by leveraging an adversarial module, which constrains the spatial smoothness consistency between continuous CT scans based on long-range spatial interaction. The system is evaluated on a dataset of 80 manually segmented CT volumes using four-fold cross-validation, and its performance surpasses other state-of-the-art methods in terms of Dice similarity coefficient and pixel-wise accuracy. A qualitative evaluation by an expert further reveals the effectiveness and potential of DRAN as a clinical segmentation tool.
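The following is a minimal PyTorch sketch of the three components named in the abstract: a dilated convolution autoencoder that keeps full resolution, a local LSTM that passes context between neighbouring feature patches, and an adversarial discriminator over (image, mask) pairs. All layer sizes, channel counts, the raster-scan patch ordering, and the residual fusion are illustrative assumptions, not the architecture reported in the paper.

```python
# Hedged sketch of DRAN-style modules; hyperparameters are assumptions.
import torch
import torch.nn as nn


class DilatedConvAutoencoder(nn.Module):
    """Step (i): dilated convolutions enlarge the receptive field while
    keeping the feature map at full resolution (no pooling or striding)."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.decoder(self.encoder(x))


class LocalLSTMModule(nn.Module):
    """Step (ii): treat non-overlapping feature patches as a sequence so an
    LSTM can propagate context between neighbouring patches."""
    def __init__(self, feat=32, patch=8, hidden=128):
        super().__init__()
        self.patch = patch
        self.lstm = nn.LSTM(feat * patch * patch, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat * patch * patch)

    def forward(self, f):
        b, c, h, w = f.shape
        p = self.patch
        # (B, C, H, W) -> sequence of flattened patches in raster-scan order (assumption).
        seq = f.unfold(2, p, p).unfold(3, p, p)                       # B, C, H/p, W/p, p, p
        seq = seq.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        out, _ = self.lstm(seq)
        out = self.proj(out).reshape(b, h // p, w // p, c, p, p)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)


class SegmentationGenerator(nn.Module):
    """Dilated autoencoder + local LSTM context, then a 1x1 classifier head."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.backbone = DilatedConvAutoencoder(in_ch, feat)
        self.context = LocalLSTMModule(feat)
        self.head = nn.Conv2d(feat, 1, 1)

    def forward(self, x):
        f = self.backbone(x)
        f = f + self.context(f)        # residual fusion of patch context (assumption)
        return torch.sigmoid(self.head(f))


class Discriminator(nn.Module):
    """Step (iii): adversarial critic scoring (CT slice, mask) pairs, pushing
    the generator toward spatially consistent segmentations."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat * 2, 1),
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))


if __name__ == "__main__":
    ct = torch.randn(2, 1, 64, 64)                 # toy CT slices
    gen, disc = SegmentationGenerator(), Discriminator()
    pred = gen(ct)                                  # (2, 1, 64, 64) probability map
    score = disc(ct, pred)                          # (2, 1) realism score
    print(pred.shape, score.shape)
```

In a full adversarial training loop, the generator would be optimised with a segmentation loss (e.g. Dice or cross-entropy) plus an adversarial term from the discriminator, while the discriminator is trained to distinguish ground-truth masks from predictions.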

Inspec keywords: medical image processing; image segmentation; biological organs; cancer; computerised tomography

Other keywords: convolution autoencoder module; previous segmentation methods; abdominal computed tomography scans; deep recurrent adversarial network; CT scan patches; clinical segmentation tool; spatial interaction; short-term memory module; automated pancreas segmentation; contextual spatial correlation; pancreas cancer diagnosis

Subjects: Patient diagnostic methods and instrumentation; Biology and medical computing; X-rays and particle beams (medical uses); X-ray techniques: radiography and computed tomography (biomedical imaging/measurement); Computer vision and image processing techniques; Optical, image and video signal processing
