Adaptive random testing based on flexible partitioning

Adaptive random testing (ART) achieves better failure-detection effectiveness than random testing by spreading test cases evenly across the input domain. ART by random partitioning (RP-ART) is a lightweight method, but its advantage over random testing is relatively small. Although the iterative partition testing (IPT) method performs well at detecting failures that follow a block pattern, it sacrifices randomness during test case generation. To overcome the shortcomings of these two algorithms, a new algorithm named ART by flexible partitioning (FP-ART) is proposed. In FP-ART, an appropriate test case is selected from a set of random candidates according to their boundary distances, and the corresponding sub-domain is then partitioned by the newly selected test case. With this kind of flexible partitioning, the randomness of test case selection is preserved while the spatial distribution of test cases becomes more even and diverse. According to the results of simulation and empirical experiments, FP-ART demonstrates better failure-detection effectiveness than RP-ART and is better suited to detecting failures in strip patterns than the IPT method. Meanwhile, its failure-detection ability is much stronger than that of fixed-size-candidate-set ART when the failure rate is relatively high.
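The abstract does not give the full algorithm, but the core loop it describes (draw random candidates, select one by boundary distance, partition the enclosing sub-domain at that point) can be sketched for a two-dimensional input domain. The specific choices below — selecting the largest sub-domain, preferring the candidate farthest from the sub-domain boundary, and splitting into four quadrants at the test case — are illustrative assumptions, not the paper's exact rules:

```python
import random

def boundary_distance(point, region):
    # Minimum distance from a point to any edge of the
    # rectangular region ((x_lo, y_lo), (x_hi, y_hi)).
    (x_lo, y_lo), (x_hi, y_hi) = region
    x, y = point
    return min(x - x_lo, x_hi - x, y - y_lo, y_hi - y)

def fp_art_next(regions, num_candidates=10):
    # One FP-ART-style step (assumed variant): take the largest
    # sub-domain, draw random candidates inside it, keep the candidate
    # farthest from the boundary, then split the sub-domain into four
    # sub-rectangles at the chosen point.
    def area(r):
        (x_lo, y_lo), (x_hi, y_hi) = r
        return (x_hi - x_lo) * (y_hi - y_lo)

    region = max(regions, key=area)
    (x_lo, y_lo), (x_hi, y_hi) = region
    candidates = [(random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi))
                  for _ in range(num_candidates)]
    test_case = max(candidates, key=lambda p: boundary_distance(p, region))

    # Flexible partitioning: the split lines pass through the test case
    # itself, so the partition adapts to where the test case landed.
    regions.remove(region)
    x, y = test_case
    regions.extend([((x_lo, y_lo), (x, y)), ((x, y_lo), (x_hi, y)),
                    ((x_lo, y), (x, y_hi)), ((x, y), (x_hi, y_hi))])
    return test_case

# Usage: generate a few test cases over the unit square.
regions = [((0.0, 0.0), (1.0, 1.0))]
tests = [fp_art_next(regions) for _ in range(5)]
```

Because the candidate farthest from the boundary tends to sit near the centre of its sub-domain, successive test cases are pushed apart, which is the even-spreading behaviour the abstract attributes to ART.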

Inspec keywords: software reliability; program testing; iterative methods

Other keywords: test case selection; test case generation; failure detection; failure-detection effectiveness; FP-ART; flexible partitioning; random partitioning; lightweight method; IPT method; iterative partition testing method; RP-ART; boundary distance; failure-detection ability; adaptive random testing method

Subjects: Interpolation and function approximation (numerical analysis); Diagnostic, testing, debugging and evaluating systems; Software engineering techniques

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-sen.2019.0325