A cost-effective adaptive random testing approach by dynamic restriction

A key objective of software testing is to detect, at minimal cost, the program faults that cause software failures. Random testing (RT) is one of the most basic testing techniques, but its failure-detection effectiveness has been widely criticised. Several researchers have proposed that the failure-detection effectiveness of RT can be improved by spreading test cases evenly across the input domain. Adaptive random testing (ART) describes a family of algorithms that employ various strategies to achieve such an even, yet still random, spread of test cases. Fixed-size-candidate-set ART (FSCS-ART) is a widely studied ART algorithm; however, the large number of distance computations it requires makes it computationally expensive. The authors propose a new ART method that restricts distance computations to only those test cases inside an exclusion zone. Experimental results show that the new ART method not only outperforms RT but also achieves failure-detection effectiveness similar to that of FSCS-ART, while significantly reducing computation overhead.
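Since the full text sits behind the paywall, the following is only a minimal sketch of the FSCS-ART selection step the abstract refers to, included to make the distance-computation cost concrete. The function name `fscs_art_next`, the two-dimensional unit-square input domain, and the candidate-set size `k = 10` are illustrative assumptions rather than details taken from the paper; the authors' dynamic restriction, which limits the comparison to executed test cases inside an exclusion zone, is noted only in a comment.

```python
import math
import random

def fscs_art_next(executed, k=10, dim=2):
    """One FSCS-ART selection step (illustrative sketch).

    Draw k random candidates from the unit hypercube and return the one
    whose nearest previously executed test case is farthest away.  The
    inner loop below is the distance-computation cost the paper targets;
    the proposed dynamic restriction would skip executed test cases that
    lie outside an exclusion zone around the candidate (details are in
    the paper and are not reproduced here).
    """
    candidates = [[random.random() for _ in range(dim)] for _ in range(k)]
    if not executed:                 # first test case: any candidate will do
        return candidates[0]
    best, best_min_dist = None, -1.0
    for c in candidates:
        # distance from this candidate to its nearest executed test case
        nearest = min(math.dist(c, e) for e in executed)
        if nearest > best_min_dist:  # keep the candidate that maximises
            best, best_min_dist = c, nearest  # the nearest-neighbour distance
    return best

# Usage: generate a small, evenly spread test sequence.
tests = []
for _ in range(20):
    tests.append(fscs_art_next(tests))
```

Because each selection compares every candidate against every previously executed test case, generating n test cases costs on the order of k·n² distance computations in total, which is the overhead the proposed restriction to an exclusion zone aims to reduce.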
