Prioritising abstract test cases: an empirical study

IET Software

Test-case prioritisation (TCP) attempts to schedule test cases for execution in an order that detects faults as quickly as possible. TCP has been widely applied in many testing scenarios, such as regression testing and fault localisation. Abstract test cases (ATCs) are derived from models of the system under test and have been applied in many testing environments, such as model-based testing and combinatorial interaction testing. Although various empirical and analytical comparisons of some ATC prioritisation (ATCP) techniques have been conducted, to the best of our knowledge, no comparative study focusing on the most current techniques has yet been reported. In this study, we investigated 18 ATCP techniques, categorised into four classes, and conducted a comprehensive empirical study comparing 16 of the 18 techniques in terms of their testing effectiveness and efficiency. We found that different ATCP techniques can be cost-effective in different testing scenarios, allowing us to present recommendations and guidelines for which techniques to use under which conditions.
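To illustrate the kind of technique the study compares, the sketch below implements one classic ATCP approach: greedy "additional" pairwise-coverage prioritisation, which repeatedly selects the abstract test case covering the most not-yet-covered parameter-value pairs. This is a minimal illustration under our own assumptions (the `prioritise` function name, the tuple representation of ATCs, and the example suite are all hypothetical), not a reproduction of any of the paper's 16 evaluated techniques.

```python
from itertools import combinations

def pairwise_interactions(tc):
    # All (parameter-index pair, value pair) interactions covered by
    # one abstract test case, represented as a tuple of parameter values.
    return {((i, j), (tc[i], tc[j])) for i, j in combinations(range(len(tc)), 2)}

def prioritise(test_suite):
    """Greedy 'additional' pairwise-coverage prioritisation: at each step,
    pick the remaining test case that covers the most uncovered pairs."""
    remaining = list(test_suite)
    covered = set()
    ordered = []
    while remaining:
        best = max(remaining, key=lambda tc: len(pairwise_interactions(tc) - covered))
        remaining.remove(best)
        covered |= pairwise_interactions(best)
        ordered.append(best)
    return ordered

# Hypothetical suite over three parameters with two abstract values each.
suite = [("a1", "b1", "c1"), ("a1", "b1", "c2"),
         ("a2", "b2", "c1"), ("a2", "b2", "c2")]
order = prioritise(suite)
```

After the first test case is scheduled, the greedy step prefers `("a2", "b2", "c1")` over `("a1", "b1", "c2")` because it contributes three new pairs rather than two, which is exactly the early-coverage behaviour that coverage-based ATCP techniques aim for.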

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-sen.2018.5199