Fully-connected semantic segmentation of hyperspectral and LiDAR data


Semantic segmentation is an emerging field in the computer vision community in which one can segment and label an object all at once by considering the effects of the neighbouring pixels. In this study, the authors propose a new semantic segmentation model that fuses hyperspectral images with light detection and ranging (LiDAR) data in the three-dimensional space defined by Universal Transverse Mercator (UTM) coordinates and solves the task with a fully-connected conditional random field (CRF). First, the pairwise energy in the authors' CRF model takes the UTM coordinates of the data into account and thus performs fusion in real-world coordinates. Second, as opposed to the commonly used Markov random fields (MRFs), which consider only nearby pixels, the fully-connected CRF treats all pixels in an image as connected; the authors show that these long-range interactions significantly improve the results over traditional MRF models. Third, they propose an adaptive scaling scheme that decides the weights of the LiDAR and hyperspectral sensors in shadowy or sunny regions. Experimental results on the Houston dataset demonstrate the effectiveness of the method compared with several MRF-based approaches as well as other competing methods.
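To make the fusion idea concrete, the pairwise term of a fully-connected CRF with Gaussian edge potentials can be sketched as a kernel over 3D UTM positions (easting, northing, LiDAR elevation) combined with spectral features. The following is a minimal illustrative sketch, not the authors' implementation: function and parameter names (`theta_pos`, `theta_feat`, `w`) are assumptions, and a practical dense CRF would use the efficient mean-field inference of Krähenbühl and Koltun rather than materialising the full N×N affinity matrix as done here.

```python
import numpy as np

def pairwise_gaussian_energy(coords, feats, theta_pos=10.0, theta_feat=0.5, w=1.0):
    """Illustrative dense pairwise affinity for a fully-connected CRF.

    k(i, j) = w * exp(-|p_i - p_j|^2 / (2 * theta_pos^2)
                      - |f_i - f_j|^2 / (2 * theta_feat^2))

    coords: (N, 3) array of UTM easting/northing plus LiDAR elevation.
    feats:  (N, D) array of spectral features (e.g. reduced hyperspectral bands).
    Returns an (N, N) affinity matrix: every pixel pair is connected,
    unlike an MRF, which links only spatial neighbours.
    """
    dp = coords[:, None, :] - coords[None, :, :]   # pairwise position differences
    df = feats[:, None, :] - feats[None, :, :]     # pairwise feature differences
    return w * np.exp(-(dp ** 2).sum(-1) / (2 * theta_pos ** 2)
                      - (df ** 2).sum(-1) / (2 * theta_feat ** 2))
```

Because the kernel decays with 3D UTM distance, two pixels that are close in image space but far apart in elevation (e.g. a rooftop edge over a road) receive a weak link, which is the effect of performing the fusion in real-world rather than image coordinates.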
