Biomedical compound figure detection using deep learning and fusion techniques


Images contain large amounts of information but present different challenges from textual data. One such challenge is compound figures: images composed of two or more subfigures. A deep learning model is proposed for compound figure detection (CFD) in biomedical articles. First, pre-trained convolutional neural networks (CNNs) are selected for transfer learning, exploiting the image classification performance of CNNs while overcoming the limited size of CFD datasets. Next, the pre-trained CNNs are fine-tuned on the training data, with early stopping to avoid overfitting. Alternatively, layer activations of the pre-trained CNNs are extracted and used as input features to a support vector machine (SVM) classifier. Finally, the outputs of the individual models are combined with score-based fusion. Using AlexNet, VGG-16, and VGG-19 pre-trained CNNs fine-tuned until validation accuracy stops improving, combined with the combPROD score-based fusion operator, the proposed combined model achieves best test accuracies of 90.03% and 96.93% on the ImageCLEF 2015 and 2016 CFD subtask datasets, respectively, outperforming traditional hand-crafted and other deep learning representations.
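A minimal sketch of the score-based fusion step described above, assuming the fused quantities are each model's per-class confidence scores (combPROD multiplies scores across models; combSUM, a common alternative, adds them). The example scores, array shapes, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def comb_prod(scores):
    """combPROD fusion: elementwise product of per-class scores
    across models. `scores` has shape (n_models, n_classes)."""
    return np.prod(scores, axis=0)

def comb_sum(scores):
    """combSUM fusion: elementwise sum across models, for comparison."""
    return np.sum(scores, axis=0)

# Hypothetical softmax outputs of three fine-tuned CNNs
# (e.g. AlexNet, VGG-16, VGG-19) for one input image, over the
# classes [compound, non-compound].
scores = np.array([
    [0.70, 0.30],
    [0.60, 0.40],
    [0.80, 0.20],
])

fused = comb_prod(scores)            # [0.336, 0.024]
prediction = int(np.argmax(fused))   # 0 -> compound figure
```

Because combPROD multiplies scores, a single model assigning a near-zero score to a class can veto that class, which tends to reward class predictions on which all models agree.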


