
Robustness of text-based completely automated public Turing test to tell computers and humans apart



Text-based completely automated public Turing tests to tell computers and humans apart (CAPTCHAs) are widely deployed across the Internet to defend against undesirable or malicious bot programs. In this study, the authors provide a systematic analysis of text-based CAPTCHAs and improve their earlier attack on hollow CAPTCHAs so that it applies to all text CAPTCHAs. With this improved attack, they have broken the CAPTCHA schemes adopted by 19 of the top 20 websites ranked by Alexa, including two versions of the well-known ReCAPTCHA. Success rates range from 12 to 88.8% (the success rate for the Yandex CAPTCHA is 0%), demonstrating the effectiveness of the attack method, which applies not only to hollow CAPTCHAs but also to non-hollow ones. As the attack casts serious doubt on the viability of current designs, the authors offer lessons and guidelines for designing better text-based CAPTCHAs.
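The abstract does not reproduce the attack's internals, but attacks on text CAPTCHAs of this kind typically binarise the image, segment the foreground into character components (for example by flood-fill/connected-component labelling), and then recognise each component individually. As a hedged illustration only, not the authors' actual method, the segmentation step can be sketched on a toy bitmap (the `BITMAP` example below is invented for demonstration):

```python
from collections import deque

# Toy 0/1 bitmap standing in for a binarised CAPTCHA image;
# '#' pixels are foreground (ink), '.' pixels are background.
BITMAP = [
    "##..##",
    "##..##",
    "......",
    "..##..",
]

def connected_components(bitmap):
    """Label 4-connected foreground regions via flood fill.

    Returns a list of components, each a set of (row, col) pixels.
    Connected-component analysis like this is a common first
    segmentation step in text-CAPTCHA solving pipelines.
    """
    rows, cols = len(bitmap), len(bitmap[0])
    seen = set()
    components = []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] != "#" or (r, c) in seen:
                continue
            comp, queue = set(), deque([(r, c)])
            seen.add((r, c))
            while queue:  # breadth-first flood fill from the seed pixel
                cr, cc = queue.popleft()
                comp.add((cr, cc))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and bitmap[nr][nc] == "#"
                            and (nr, nc) not in seen):
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            components.append(comp)
    return components

print(len(connected_components(BITMAP)))  # 3 separate blobs
```

In a real attack, each extracted component (or merged group of components) would be passed to a character recogniser; the hard part, and the focus of the paper, is handling CAPTCHAs whose characters overlap or are hollow so that components do not map one-to-one onto characters.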


