Multi-scale features fusion from sparse LiDAR data and single image for depth completion


Recently, deep learning-based methods for dense depth completion from sparse depth data have shown superior performance over traditional techniques. However, sparse depth data lack scene detail, such as spatial structure and texture information. To overcome this problem, an additional single image is introduced and a multi-scale feature fusion scheme is proposed to learn richer correlations between the two modalities. Furthermore, a sparse convolution operation is exploited to improve feature robustness on sparse depth data. Experiments demonstrate that the approach clearly improves depth completion performance and outperforms all previously published methods. The authors believe this work also offers guidance for stereo depth estimation fused with sparse LiDAR depth data.
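
The letter itself gives no code, and the architecture is only summarised above; the following is a minimal PyTorch sketch of the two ingredients the abstract names: a sparsity-invariant convolution in the spirit of Uhrig et al.'s 'Sparsity Invariant CNNs', and a per-scale fusion block that concatenates image and depth features. All class names, channel widths, and the concatenation-based fusion are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseConv(nn.Module):
    """Sparsity-invariant convolution (after Uhrig et al., 'Sparsity
    Invariant CNNs'): responses are normalised by the number of observed
    depth pixels in each kernel window, and the validity mask is
    propagated by max-pooling, so the output does not depend on how
    sparse the LiDAR input happens to be."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Fixed all-ones kernel that counts valid pixels per window.
        self.register_buffer("ones", torch.ones(1, 1, k, k))
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)

    def forward(self, x, mask):
        # Suppress unobserved pixels, convolve, then normalise by the
        # per-window count of observed pixels.
        num = self.conv(x * mask)
        den = F.conv2d(mask, self.ones, padding=self.ones.shape[-1] // 2)
        out = num / den.clamp(min=1e-5) + self.bias.view(1, -1, 1, 1)
        return out, self.pool(mask)


class FusionBlock(nn.Module):
    """Hypothetical fusion unit: concatenates the RGB-branch and
    depth-branch feature maps of one scale and mixes them with a 3x3
    convolution. Applying one such block at every encoder scale yields
    a multi-scale fusion of the two modalities."""

    def __init__(self, rgb_ch, depth_ch, out_ch):
        super().__init__()
        self.mix = nn.Conv2d(rgb_ch + depth_ch, out_ch, 3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        return F.relu(self.mix(torch.cat([rgb_feat, depth_feat], dim=1)))


# Toy usage: a 64x64 depth map with roughly 5% valid LiDAR returns.
depth = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()
rgb_feat = torch.rand(1, 16, 64, 64)

sconv = SparseConv(1, 16)
depth_feat, mask = sconv(depth, mask)
fused = FusionBlock(16, 16, 32)(rgb_feat, depth_feat)  # (1, 32, 64, 64)
```

In a full network, one such fusion block per encoder scale would combine the two branches, with the propagated validity mask carried forward through the depth branch.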
