A cost-effective adaptive random testing approach by dynamic restriction

A key objective of software testing is to find, at low cost, the program errors that cause software failures. Random testing (RT) is a basic testing technique, but many researchers have criticised its failure-detection effectiveness. Several researchers have proposed that the failure-detection effectiveness of RT improves if test cases are spread evenly across the input domain. Adaptive random testing (ART) describes a family of algorithms that employ various strategies to spread test cases both evenly and randomly. Fixed-size-candidate-set ART (FSCS-ART) is an ART algorithm that has attracted extensive research; however, the large number of distance computations it performs makes it computationally expensive. The authors propose a new ART method that restricts distance computations to only those test cases inside an exclusion zone. Experimental results show that the new method not only improves on RT but also provides failure-detection effectiveness similar to that of FSCS-ART, while significantly reducing computational overhead.
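To illustrate why FSCS-ART incurs a high distance-computation cost, the following is a minimal sketch of its candidate-selection step (not the authors' proposed method): each new test case is chosen from a fixed-size set of random candidates as the one whose nearest executed test case is farthest away. The function name, candidate-set size, and unit-square input domain are illustrative assumptions.

```python
import random

def fscs_art_next(executed, k=10, dim=2, rng=random):
    """Sketch of FSCS-ART candidate selection: from k random
    candidates, pick the one that maximises the distance to its
    nearest neighbour among the executed test cases. With n
    executed test cases this costs O(k * n) distance computations
    per selection, which is the overhead the paper targets."""
    candidates = [tuple(rng.random() for _ in range(dim)) for _ in range(k)]
    if not executed:
        # No history yet: any random candidate is acceptable.
        return candidates[0]

    def nearest_dist(c):
        # Euclidean distance from candidate c to its closest executed test case.
        return min(
            sum((a - b) ** 2 for a, b in zip(c, e)) ** 0.5
            for e in executed
        )

    return max(candidates, key=nearest_dist)

# Demo: spread five test cases over the unit square.
executed = []
for _ in range(5):
    executed.append(fscs_art_next(executed))
print(executed)
```

The restriction proposed in the paper can be read against this sketch: instead of computing `nearest_dist` over all executed test cases, distance computations are limited to the test cases inside an exclusion zone around each candidate.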


