Sparse representation via optimal matching convolution framelets

Recently, a tight frame called convolution framelets (CFs), constructed by convolving local and non-local bases, was proposed and provides valuable insight into patch-based processing approaches from the viewpoint of sparse representation (SR). However, it remains unclear how to represent signals in the lifted space with a guarantee of energy concentration, and how to optimise the local base for a given non-local base. To address these issues, the equivalence between the signal space and its lifted space is established through the Hankel operator. In the lifted space, the energy concentration of signals is measured by sparsity rather than by the Euclidean norm and the rank. With this new objective function, an optimisation model is built to train the optimal local base for a given non-local base from training samples, which motivates the proposed optimal matching convolution framelets (OMCFs). In addition, a numerical algorithm is designed to solve the proposed model using an alternating optimisation strategy. OMCFs are tested for SR on speech signals, and comparisons with traditional SR tools, such as the discrete cosine transform (DCT) and Haar wavelets, demonstrate their better performance.
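As a point of reference for the lifting described above, the following is a minimal numerical sketch, assuming Python with NumPy only, a toy two-tone test signal, the DCT-II as the "given" local base and the left singular vectors of the Hankel matrix as an illustrative non-local base; the helper names hankel_lift and dct_basis are hypothetical. It illustrates the Hankel lifting H(f), a framelet-style coefficient matrix C = Phi^T H(f) Psi, and reading energy concentration off the sparsity of C; it does not reproduce the OMCF training or the alternating optimisation proposed in the paper.

import numpy as np

def hankel_lift(f, d):
    """Lift a 1-D signal f of length n into an (n - d + 1) x d Hankel matrix
    whose rows are the overlapping length-d patches of f."""
    n = len(f)
    return np.stack([f[i:i + d] for i in range(n - d + 1)])

def dct_basis(d):
    """Orthonormal DCT-II basis of size d x d (columns are basis vectors)."""
    k = np.arange(d)
    B = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / d)
    B[:, 0] *= 1.0 / np.sqrt(2.0)
    return B * np.sqrt(2.0 / d)

# Toy signal: two tones plus mild noise (illustrative only).
rng = np.random.default_rng(0)
n, d = 256, 16
t = np.arange(n) / n
f = (np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 21 * t)
     + 0.01 * rng.standard_normal(n))

H = hankel_lift(f, d)                               # lifted representation of f
Phi, _, _ = np.linalg.svd(H, full_matrices=False)   # illustrative non-local base
Psi = dct_basis(d)                                  # fixed local base (not a trained OMCF base)

C = Phi.T @ H @ Psi                                 # framelet coefficients in the lifted space

# Energy concentration read off as sparsity rather than rank or Euclidean norm:
# count how many coefficients carry 99.9% of the energy.
c = np.sort(np.abs(C).ravel())[::-1]
k999 = np.searchsorted(np.cumsum(c**2), 0.999 * np.sum(c**2)) + 1
print(f"coefficients: {C.size}, carrying 99.9% of the energy: {k999}")

# Perfect reconstruction: Phi spans the column space of H and Psi is orthogonal,
# so H = Phi C Psi^T up to numerical precision.
print("reconstruction error:", np.linalg.norm(H - Phi @ C @ Psi.T))

Replacing the fixed DCT local base Psi with one trained from samples for the chosen non-local base, as the optimisation model described above does, is what would distinguish OMCFs from generic SR tools such as the DCT or Haar wavelets used in this sketch.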
