Overlap-aware rapid type analysis for constructing one-to-one matched call graphs in regression test selection

Regression testing is an important but costly activity for verifying a program after code changes. Regression test selection (RTS) aims to reduce this cost by selecting only the test cases affected by the changes. Among the several ways of selecting such affected test cases, call graphs have been statically constructed to select test cases at the method-level granularity. However, RTS techniques reduce the cost of regression testing less than expected unless the call graphs are efficiently matched one-to-one with the test cases. In this study, the authors propose overlap-aware rapid type analysis (ORTA), which is designed to minimise the redundant cost of creating the matched call graphs with rapid type analysis (RTA). The one-to-one matching and ORTA were evaluated on 1487 commits selected from 30 Java projects. RTA-based RTS with the one-to-one matching selected 46.90% fewer test cases, at the cost of a 2.76% longer end-to-end regression-testing time, than without the one-to-one matching. This added time was reduced by 22.58% when ORTA was substituted for RTA. ORTA achieved this cost reduction while removing 82.77% of the duplicate edges that RTA created on 993 commits.
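To make the core idea of RTA concrete: unlike class hierarchy analysis, which resolves a virtual call to every subtype of the receiver's declared type, RTA keeps only those subtypes the program actually instantiates, yielding fewer (and fewer duplicate) call-graph edges. The following is a minimal sketch of that resolution rule, not the authors' ORTA implementation; the class names, toy hierarchy, and `resolve` helper are hypothetical illustrations.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Minimal sketch of RTA-style virtual-call resolution.
// The hierarchy and names below are hypothetical, not from the paper.
public class RtaSketch {
    // subclass -> superclass edges of a toy class hierarchy
    static final Map<String, String> SUPER = Map.of(
        "ArrayList", "List",
        "LinkedList", "List"
    );

    // Receiver types RTA keeps for a virtual call on `declaredType`:
    // subtypes of the declared type that the program instantiates
    // somewhere. Class hierarchy analysis would instead keep all
    // subtypes, producing more edges.
    static Set<String> resolve(String declaredType, Set<String> instantiated) {
        Set<String> targets = new TreeSet<>();
        for (String cls : instantiated) {
            // walk up the hierarchy: is cls a subtype of declaredType?
            for (String c = cls; c != null; c = SUPER.get(c)) {
                if (c.equals(declaredType)) {
                    targets.add(cls);
                    break;
                }
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        // Only ArrayList is ever instantiated, so a call through List
        // gets a single edge instead of one edge per subtype.
        System.out.println(resolve("List", Set.of("ArrayList"))); // prints [ArrayList]
    }
}
```

In the paper's setting, one such call graph is built per test case; the overlap the title refers to arises because many test cases exercise the same methods, so a per-test RTA pass re-derives many identical edges.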

Inspec keywords: regression analysis; Java; program testing; software maintenance

Other keywords: overlap-aware rapid type analysis; affected test cases; call graphs; redundant cost; ORTA; regression testing; regression test selection

Subjects: Combinatorial mathematics; Diagnostic, testing, debugging and evaluating systems; Other topics in statistics; Object-oriented programming; Software engineering techniques

DOI: 10.1049/iet-sen.2018.5442