
Bayes-TDG: effective test data generation using Bayesian belief network: toward failure-detection effectiveness and maximum coverage

This study presents a novel test data generation method, Bayes-TDG, which builds on the principles of Bayesian networks and supports inference over the probabilistic data in the model to increase the prime path-coverage ratio for a given programme under test (PUT). To this end, a new programme structure-based probabilistic network, TDG-NET, is proposed. It models the conditional dependencies among the programme's basic blocks (BBs) on the one hand, and the conditional dependencies between the transitions among BBs and the input parameters on the other. To achieve failure-detection effectiveness, the authors propose a path selection strategy based on the predicted outcomes of the generated test cases. In this way, the need for a human oracle is mitigated, and the generated test suite can be used directly in fault localisation. Several experiments are conducted to evaluate the performance of Bayes-TDG. The results indicate that the method is promising and that the generated test suites are effective in terms of both coverage and failure detection.
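To make the core idea concrete, the following is a minimal, self-contained Python sketch of a TDG-NET-style model, not the authors' actual implementation: transition probabilities between basic blocks, conditioned on an input partition, are estimated from execution traces, and inference over them ranks partitions by their likelihood of covering a target prime path. All names here (TdgNet, select_partition, the toy block ids and partitions) are hypothetical illustrations.

```python
from collections import defaultdict
import random

class TdgNet:
    """Toy stand-in for a TDG-NET-style model (hypothetical API): conditional
    probability tables over basic-block (BB) transitions, conditioned on an
    input partition, estimated from observed execution traces."""

    def __init__(self):
        # counts[(src, dst)][q]: times an input from partition q took src -> dst
        self.counts = defaultdict(lambda: defaultdict(int))
        # totals[src][q]: times an input from partition q reached block src
        self.totals = defaultdict(lambda: defaultdict(int))

    def observe(self, trace, partition):
        """Update the CPT counts from one executed trace (a list of BB ids)."""
        for src, dst in zip(trace, trace[1:]):
            self.counts[(src, dst)][partition] += 1
            self.totals[src][partition] += 1

    def p_transition(self, src, dst, partition):
        """P(dst | src, partition) with add-one (Laplace) smoothing."""
        n = self.totals[src][partition]
        return (self.counts[(src, dst)][partition] + 1) / (n + 2)

    def p_path(self, path, partition):
        """Probability that an input from `partition` traverses `path`,
        under the chain (Markov) factorisation of the network."""
        p = 1.0
        for src, dst in zip(path, path[1:]):
            p *= self.p_transition(src, dst, partition)
        return p


def select_partition(net, target_path, partitions):
    """Rank input partitions by likelihood of covering an uncovered prime path."""
    return max(partitions, key=lambda q: net.p_path(target_path, q))


# Toy usage: two input partitions, traces gathered from earlier test runs.
net = TdgNet()
net.observe(["b0", "b1", "b3"], partition="x<0")
net.observe(["b0", "b2", "b3"], partition="x>=0")
net.observe(["b0", "b2", "b3"], partition="x>=0")

target = ["b0", "b1", "b3"]                    # an uncovered prime path
best = select_partition(net, target, ["x<0", "x>=0"])
print(best)                                    # -> "x<0"
# Sample a concrete test input from the winning partition:
x = random.uniform(-100, -1) if best == "x<0" else random.uniform(0, 100)
```

In the same spirit, the oracle-mitigation step described in the abstract could be sketched by attaching a pass/fail node to the network and estimating P(fail | partition) from previously labelled runs; the paper's path selection strategy uses such predicted outcomes, so likely-failing test cases can feed fault localisation without a human oracle.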
