Application of quantisation-based deep-learning model compression in JPEG image steganalysis

