Image-based CAPTCHAs based on neural style transfer

Over the last few years, the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) has been used as an effective method to protect websites from malicious attacks; however, CAPTCHA designers have struggled to balance good usability with high security. In this study, the authors apply neural style transfer to enhance the security of CAPTCHA design. Two image-based CAPTCHAs based on neural style transfer, Grid-CAPTCHA and Font-CAPTCHA, are proposed. Grid-CAPTCHA presents nine stylized images and requires users to select all images matching a short description, while Font-CAPTCHA asks users to click the Chinese characters in an image in the order given by the description. To evaluate how effectively this technique enhances CAPTCHA security, the authors conducted a comprehensive field study and compared the two schemes with similar mechanisms. The comparison results demonstrate that neural style transfer decreases the success rate of automated attacks. Human users achieved solving rates of 75.04% and 84.49% on the Grid-CAPTCHA and Font-CAPTCHA schemes, respectively, indicating good usability. These results show that deep learning can have a positive effect on CAPTCHA security and provide a promising direction for future CAPTCHA research.
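The neural style transfer mentioned above (in the formulation of Gatys et al.) matches the style of a generated image to a reference artwork by comparing Gram matrices of CNN feature maps. As a minimal illustrative sketch, not the authors' implementation, the style loss for a single layer can be written as follows; the feature-map shapes and normalisation constant are assumptions for illustration:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's activations.

    features: array of shape (channels, height*width) — a flattened
    CNN feature map. Entry (i, j) is the correlation between the
    responses of filters i and j, which captures texture/style.
    """
    c, n = features.shape
    return features @ features.T / (c * n)

def style_loss(gen_features, style_features):
    """Mean squared difference between the Gram matrices of the
    generated image and the style image at one layer."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))
```

In practice this loss is summed over several convolutional layers and combined with a content loss; minimising the total by gradient descent on the image pixels (or, as in Johnson et al., by a feed-forward network) yields the stylized CAPTCHA images the scheme relies on.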
