Progress on approaches to software defect prediction

References

    1. Hall, T., Beecham, S., Bowes, D., et al: ‘A systematic literature review on fault prediction performance in software engineering’, IEEE Trans. Softw. Eng., 2012, 38, (6), pp. 1276–1304
    2. Menzies, T., Milton, Z., Turhan, B., et al: ‘Defect prediction from static code features: current results, limitations, new approaches’, Autom. Softw. Eng., 2010, 17, (4), pp. 375–407
    3. Catal, C., Diri, B.: ‘A systematic review of software fault prediction studies’, Expert Syst. Appl., 2009, 36, (4), pp. 7346–7354
    4. Catal, C.: ‘Software fault prediction: a literature review and current trends’, Expert Syst. Appl., 2011, 38, (4), pp. 4626–4636
    5. Malhotra, R.: ‘A systematic review of machine learning techniques for software fault prediction’, Appl. Soft Comput., 2015, 27, pp. 504–518
    6. Naik, K., Tripathy, P.: ‘Software testing and quality assurance: theory and practice’ (John Wiley & Sons, Hoboken, NJ, 2011)
    7. Menzies, T., Greenwald, J., Frank, A.: ‘Data mining static code attributes to learn defect predictors’, IEEE Trans. Softw. Eng., 2007, 33, (1), pp. 2–13
    8. Song, Q., Jia, Z., Shepperd, M., et al: ‘A general software defect proneness prediction framework’, IEEE Trans. Softw. Eng., 2011, 37, (3), pp. 356–370
    9. Herzig, K.: ‘Using pre-release test failures to build early post-release defect prediction models’. Proc. IEEE 25th Int. Symp. Software Reliability Engineering, 2014, pp. 300–311
    10. Bennin, K.E., Toda, K., Kamei, Y., et al: ‘Empirical evaluation of cross-release effort-aware defect prediction models’. Proc. IEEE Int. Conf. Software Quality, Reliability and Security, 2016, pp. 214–221
    11. Zimmermann, T., Nagappan, N., Gall, H., et al: ‘Cross-project defect prediction: a large scale experiment on data vs. domain vs. process’. Proc. 7th Joint Meeting of the European Software Engineering Conf. and ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2009, pp. 91–100
    12. Turhan, B., Menzies, T., Bener, A.B., et al: ‘On the relative value of cross-company and within-company data for defect prediction’, Empir. Softw. Eng., 2009, 14, (5), pp. 540–578
    13. He, Z., Shu, F., Yang, Y., et al: ‘An investigation on the feasibility of cross-project defect prediction’, Autom. Softw. Eng., 2012, 19, (2), pp. 167–199
    14. Kamei, Y., Shihab, E.: ‘Defect prediction: accomplishments and future challenges’. Proc. IEEE 23rd Int. Conf. Software Analysis, Evolution, and Reengineering, 2016, pp. 33–45
    15. Kim, S., Whitehead, E.J., Zhang, Y.: ‘Classifying software changes: clean or buggy?’, IEEE Trans. Softw. Eng., 2008, 34, (2), pp. 181–196
    16. Nam, J., Pan, S.J., Kim, S.: ‘Transfer defect learning’. Proc. 35th Int. Conf. Software Engineering, 2013, pp. 382–391
    17. Turhan, B., Mısırlı, A.T., Bener, A.: ‘Empirical evaluation of the effects of mixed project data on learning defect predictors’, Inf. Softw. Technol., 2013, 55, (6), pp. 1101–1118
    18. Zhang, Y., Lo, D., Xia, X., et al: ‘An empirical study of classifier combination for cross-project defect prediction’. Proc. IEEE 39th Annual Computer Software and Applications Conf., 2015, pp. 264–269
    19. Krishna, R., Menzies, T., Fu, W.: ‘Too much automation? The bellwether effect and its implications for transfer learning’. Proc. 31st Int. Conf. Automated Software Engineering, 2016, pp. 122–131
    20. Andreou, A.S., Chatzis, S.P.: ‘Software defect prediction using doubly stochastic Poisson processes driven by stochastic belief networks’, J. Syst. Softw., 2016, 122, pp. 72–82
    21. Rahman, F., Devanbu, P.: ‘How, and why, process metrics are better’. Proc. 2013 Int. Conf. Software Engineering, 2013, pp. 432–441
    22. Madeyski, L., Jureczko, M.: ‘Which process metrics can significantly improve defect prediction models? An empirical study’, Softw. Qual. J., 2015, 23, (3), pp. 393–422
    23. Radjenović, D., Heričko, M., Torkar, R., et al: ‘Software fault prediction metrics: a systematic literature review’, Inf. Softw. Technol., 2013, 55, (8), pp. 1397–1418
    24. Halstead, M.H.: ‘Elements of software science’, vol. 7 (Elsevier, New York, 1977)
    25. McCabe, T.J.: ‘A complexity measure’, IEEE Trans. Softw. Eng., 1976, 2, (4), pp. 308–320
    26. Chidamber, S.R., Kemerer, C.F.: ‘A metrics suite for object oriented design’, IEEE Trans. Softw. Eng., 1994, 20, (6), pp. 476–493
    27. Abreu, F.B., Carapuça, R.: ‘Candidate metrics for object-oriented software within a taxonomy framework’, J. Syst. Softw., 1994, 26, (1), pp. 87–96
    28. Nagappan, N., Ball, T.: ‘Use of relative code churn measures to predict system defect density’. Proc. 27th Int. Conf. Software Engineering, 2005, pp. 284–292
    29. Moser, R., Pedrycz, W., Succi, G.: ‘A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction’. Proc. 30th Int. Conf. Software Engineering, 2008, pp. 181–190
    30. Hassan, A.E.: ‘Predicting faults using the complexity of code changes’. Proc. 31st Int. Conf. Software Engineering, 2009, pp. 78–88
    31. D'Ambros, M., Lanza, M., Robbes, R.: ‘Evaluating defect prediction approaches: a benchmark and an extensive comparison’, Empir. Softw. Eng., 2012, 17, (4–5), pp. 531–577
    32. Weyuker, E.J., Ostrand, T.J., Bell, R.M.: ‘Do too many cooks spoil the broth? Using the number of developers to enhance defect prediction models’, Empir. Softw. Eng., 2008, 13, (5), pp. 539–559
    33. Pinzger, M., Nagappan, N., Murphy, B.: ‘Can developer-module networks predict failures?’. Proc. 16th ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2008, pp. 2–12
    34. Meneely, A., Williams, L., Snipes, W., et al: ‘Predicting failures with developer networks and social network analysis’. Proc. 16th ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2008, pp. 13–23
    35. Bird, C., Nagappan, N., Murphy, B., et al: ‘Don't touch my code!: examining the effects of ownership on software quality’. Proc. 19th ACM SIGSOFT Symp. and the 13th European Conf. Foundations of Software Engineering, 2011, pp. 4–14
    36. Rahman, F.: ‘Ownership, experience and defects: a fine-grained study of authorship’. Proc. 33rd Int. Conf. Software Engineering, 2011, pp. 491–500
    37. Posnett, D., Devanbu, P., Filkov, V.: ‘Dual ecological measures of focus in software development’. Proc. 35th Int. Conf. Software Engineering, 2013, pp. 452–461
    38. Jiang, T., Tan, L., Kim, S.: ‘Personalized defect prediction’. Proc. IEEE/ACM 28th Int. Conf. Automated Software Engineering, 2013, pp. 279–289
    39. Lee, T., Nam, J., Han, D., et al: ‘Developer micro interaction metrics for software defect prediction’, IEEE Trans. Softw. Eng., 2016, 42, (11), pp. 1015–1035
    40. Zimmermann, T., Nagappan, N.: ‘Predicting defects using network analysis on dependency graphs’. Proc. 30th Int. Conf. Software Engineering, 2008, pp. 531–540
    41. Bird, C., Nagappan, N., Gall, H., et al: ‘Using socio-technical networks to predict failures’. Proc. 20th IEEE Int. Symp. Software Reliability Engineering, 2009
    42. D'Ambros, M., Lanza, M., Robbes, R.: ‘On the relationship between change coupling and software defects’. Proc. 16th Working Conf. Reverse Engineering, 2009, pp. 135–144
    43. Hu, W., Wong, K.: ‘Using citation influence to predict software defects’. Proc. 10th Working Conf. Mining Software Repositories, 2013, pp. 419–428
    44. Herzig, K., Just, S., Rau, A., et al: ‘Predicting defects using change genealogies’. Proc. IEEE 24th Int. Symp. Software Reliability Engineering, 2013, pp. 118–127
    45. Nagappan, N., Murphy, B., Basili, V.: ‘The influence of organizational structure on software quality’. Proc. ACM/IEEE 30th Int. Conf. Software Engineering, 2008, pp. 521–530
    46. Mockus, A.: ‘Organizational volatility and its effects on software defects’. Proc. 18th ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2010, pp. 117–126
    47. Caglayan, B., Turhan, B., Bener, A., et al: ‘Merits of organizational metrics in defect prediction: an industrial replication’. Proc. IEEE/ACM 37th Int. Conf. Software Engineering, 2015, pp. 89–98
    48. Bacchelli, A., D'Ambros, M., Lanza, M.: ‘Are popular classes more defect prone?’. Proc. 13th Int. Conf. Fundamental Approaches to Software Engineering, 2010, pp. 59–73
    49. Taba, S.E.S., Khomh, F., Zou, Y., et al: ‘Predicting bugs using antipatterns’. Proc. 29th Int. Conf. Software Maintenance, 2013, pp. 270–279
    50. Zhang, H.: ‘An investigation of the relationships between lines of code and defects’. Proc. IEEE Int. Conf. Software Maintenance, 2009, pp. 274–283
    51. Wu, R., Zhang, H., Kim, S., et al: ‘Relink: recovering links between bugs and changes’. Proc. 19th ACM SIGSOFT Symp. Foundations of Software Engineering and 13th European Software Engineering Conf., 2011, pp. 15–25
    52. Kamei, Y., Shihab, E., Adams, B., et al: ‘A large-scale empirical study of just-in-time quality assurance’, IEEE Trans. Softw. Eng., 2013, 39, (6), pp. 757–773
    53. Zimmermann, T., Premraj, R., Zeller, A.: ‘Predicting defects for eclipse’. Proc. Third Int. Workshop on Predictor Models in Software Engineering, 2007, pp. 9–15
    54. Kim, S., Zhang, H., Wu, R., et al: ‘Dealing with noise in defect prediction’. Proc. 33rd Int. Conf. Software Engineering, 2011, pp. 481–490
    55. Altinger, H., Siegl, S., Dajsuren, Y., et al: ‘A novel industry grade dataset for fault prediction based on model-driven developed automotive embedded software’. Proc. 12th Working Conf. Mining Software Repositories, 2015, pp. 494–497
    56. Shepperd, M., Song, Q., Sun, Z., et al: ‘Data quality: some comments on the NASA software defect datasets’, IEEE Trans. Softw. Eng., 2013, 39, (9), pp. 1208–1215
    57. Jureczko, M., Madeyski, L.: ‘Towards identifying software project clusters with regard to defect prediction’. Proc. 6th Int. Conf. Predictive Models in Software Engineering, 2010, pp. 1–10
    58. Lessmann, S., Baesens, B., Mues, C., et al: ‘Benchmarking classification models for software defect prediction: a proposed framework and novel findings’, IEEE Trans. Softw. Eng., 2008, 34, (4), pp. 485–496
    59. Jiang, Y., Cukic, B., Ma, Y.: ‘Techniques for evaluating fault prediction models’, Empir. Softw. Eng., 2008, 13, (5), pp. 561–595
    60. Mende, T., Koschke, R.: ‘Revisiting the evaluation of defect prediction models’. Proc. 5th Int. Conf. Predictor Models in Software Engineering, 2009, pp. 1–10
    61. Arisholm, E., Briand, L.C., Johannessen, E.B.: ‘A systematic and comprehensive investigation of methods to build and evaluate fault prediction models’, J. Syst. Softw., 2010, 83, (1), pp. 2–17
    62. Xiao, X., Lo, D., Xin, X., et al: ‘Evaluating defect prediction approaches using a massive set of metrics: an empirical study’. Proc. 30th Annual ACM Symp. Applied Computing, 2015, pp. 1644–1647
    63. Menzies, T., Dekhtyar, A., Distefano, J., et al: ‘Problems with precision: a response to ‘comments on ‘data mining static code attributes to learn defect predictors’’’, IEEE Trans. Softw. Eng., 2007, 33, (9), pp. 635–636
    64. Peters, F., Menzies, T., Gong, L., et al: ‘Balancing privacy and utility in cross-company defect prediction’, IEEE Trans. Softw. Eng., 2013, 39, (8), pp. 1054–1068
    65. Wang, S., Yao, X.: ‘Using class imbalance learning for software defect prediction’, IEEE Trans. Reliab., 2013, 62, (2), pp. 434–443
    66. Jing, X.Y., Wu, F., Dong, X., et al: ‘Heterogeneous cross-company defect prediction by unified metric representation and CCA-based transfer learning’. Proc. 10th Joint Meeting on Foundations of Software Engineering, 2015, pp. 496–507
    67. Zhang, F., Mockus, A., Keivanloo, I., et al: ‘Towards building a universal defect prediction model with rank transformed predictors’, Empir. Softw. Eng., 2016, 21, (5), pp. 1–39
    68. Rahman, F., Posnett, D., Devanbu, P.: ‘Recalling the imprecision of cross-project defect prediction’. Proc. ACM SIGSOFT 20th Int. Symp. Foundations of Software Engineering, 2012, pp. 1–11
    69. Ryu, D., Choi, O., Baik, J.: ‘Value-cognitive boosting with a support vector machine for cross-project defect prediction’, Empir. Softw. Eng., 2016, 21, (1), pp. 43–71
    70. Tantithamthavorn, C., McIntosh, S., Hassan, A.E., et al: ‘An empirical comparison of model validation techniques for defect prediction models’, IEEE Trans. Softw. Eng., 2017, 43, (1), pp. 1–18
    71. Jing, X.Y., Ying, S., Zhang, Z.W., et al: ‘Dictionary learning based software defect prediction’. Proc. 36th Int. Conf. Software Engineering, 2014, pp. 414–423
    72. Jing, X.Y., Zhang, Z.W., Ying, S., et al: ‘Software defect prediction based on collaborative representation classification’. Companion Proc. 36th Int. Conf. Software Engineering, 2014, pp. 632–633
    73. Wang, T., Zhang, Z., Jing, X.Y., et al: ‘Multiple kernel ensemble learning for software defect prediction’, Autom. Softw. Eng., 2016, 23, (4), pp. 569–590
    74. Wang, S., Liu, T., Tan, L.: ‘Automatically learning semantic features for defect prediction’. Proc. 38th Int. Conf. Software Engineering, 2016, pp. 297–308
    75. Yang, X., Lo, D., Xia, X., et al: ‘Deep learning for just-in-time defect prediction’. Proc. IEEE Int. Conf. Software Quality, Reliability and Security, 2015, pp. 17–26
    76. Chen, L., Fang, B., Shang, Z., et al: ‘Negative samples reduction in cross-company software defects prediction’, Inf. Softw. Technol., 2015, 62, pp. 67–77
    77. Xia, X., Lo, D., Pan, S.J., et al: ‘Hydra: massively compositional model for cross-project defect prediction’, IEEE Trans. Softw. Eng., 2016, 42, (10), pp. 977–998
    78. Canfora, G., Lucia, A.D., Penta, M.D., et al: ‘Defect prediction as a multiobjective optimization problem’, Softw. Test. Verif. Reliab., 2015, 25, (4), pp. 426–459
    79. Ryu, D., Jang, J.I., Baik, J.: ‘A transfer cost-sensitive boosting approach for cross-project defect prediction’, Softw. Qual. J., 2017, 25, (1), pp. 235–272
    80. Yang, X., Lo, D., Xia, X., et al: ‘Tlel: a two-layer ensemble learning approach for just-in-time defect prediction’, Inf. Softw. Technol., 2017, 87, pp. 206–220
    81. Wang, T., Zhang, Z., Jing, X.Y., et al: ‘Non-negative sparse-based semiboost for software defect prediction’, Softw. Test. Verif. Reliab., 2016, 26, (7), pp. 498–515
    82. Zhang, Z., Jing, X.Y., Wang, T.: ‘Label propagation based semi-supervised learning for software defect prediction’, Autom. Softw. Eng., 2017, 24, (1), pp. 47–69
    83. Nam, J., Kim, S.: ‘Clami: defect prediction on unlabeled datasets’. Proc. 30th IEEE/ACM Int. Conf. Automated Software Engineering, 2015, pp. 1–12
    84. Zhang, F., Zheng, Q., Zou, Y., et al: ‘Cross-project defect prediction using a connectivity-based unsupervised classifier’. Proc. 38th Int. Conf. Software Engineering, 2016, pp. 309–320
    85. Okutan, A., Yıldız, O.T.: ‘Software defect prediction using Bayesian networks’, Empir. Softw. Eng., 2014, 19, (1), pp. 154–181
    86. Bowes, D., Hall, T., Harman, M., et al: ‘Mutation-aware fault prediction’. Proc. 25th Int. Symp. Software Testing and Analysis, 2016, pp. 330–341
    87. Chen, T.H., Shang, W., Nagappan, M., et al: ‘Topic-based software defect explanation’, J. Syst. Softw., 2017, 129, pp. 79–106
    88. Shivaji, S., Whitehead, E.J., Akella, R., et al: ‘Reducing features to improve code change-based bug prediction’, IEEE Trans. Softw. Eng., 2013, 39, (4), pp. 552–569
    89. Gao, K., Khoshgoftaar, T.M., Wang, H., et al: ‘Choosing software metrics for defect prediction: an investigation on feature selection techniques’, Softw. Pract. Experience, 2011, 41, (5), pp. 579–606
    90. Laradji, I.H., Alshayeb, M., Ghouti, L.: ‘Software defect prediction using ensemble learning on selected features’, Inf. Softw. Technol., 2015, 58, pp. 388–402
    91. Liu, S., Chen, X., Liu, W., et al: ‘Fecar: a feature selection framework for software defect prediction’. Proc. IEEE 38th Annual Computer Software and Applications Conf., 2014, pp. 426–435
    92. Liu, W., Liu, S., Gu, Q., et al: ‘Fecs: a cluster based feature selection method for software fault prediction with noises’. Proc. IEEE 39th Annual Computer Software and Applications Conf., 2015, pp. 276–281
    93. Xu, Z., Xuan, J., Liu, J., et al: ‘Michac: defect prediction via feature selection based on maximal information coefficient with hierarchical agglomerative clustering’. Proc. IEEE 23rd Int. Conf. Software Analysis, Evolution, and Reengineering, 2016, pp. 370–381
    94. Menzies, T., Butcher, A., Cok, D., et al: ‘Local versus global lessons for defect prediction and effort estimation’, IEEE Trans. Softw. Eng., 2013, 39, (6), pp. 822–834
    95. Bettenburg, N., Nagappan, M., Hassan, A.E.: ‘Towards improving statistical modeling of software engineering data: think locally, act globally!’, Empir. Softw. Eng., 2015, 20, (2), pp. 294–335
    96. Herbold, S., Trautsch, A., Grabowski, J.: ‘Global vs. local models for cross-project defect prediction’, Empir. Softw. Eng., 2017, 22, (4), pp. 1866–1902
    97. Mezouar, M.E., Zhang, F., Zou, Y.: ‘Local versus global models for effort-aware defect prediction’. Proc. 26th Annual Int. Conf. Computer Science and Software Engineering, 2016, pp. 178–187
    98. Nam, J., Kim, S.: ‘Heterogeneous defect prediction’. Proc. 10th Joint Meeting on Foundations of Software Engineering, 2015, pp. 508–519
    99. He, P., Li, B., Ma, Y.: ‘Towards cross-project defect prediction with imbalanced feature sets’, CoRR, 2014, abs/1411.4228. Available at http://arxiv.org/abs/1411.4228
    100. Cheng, M., Wu, G., Jiang, M., et al: ‘Heterogeneous defect prediction via exploiting correlation subspace’. Proc. 28th Int. Conf. Software Engineering and Knowledge Engineering, 2016, pp. 171–176
    101. Zhang, H., Zhang, X.: ‘Comments on ‘data mining static code attributes to learn defect predictors’’, IEEE Trans. Softw. Eng., 2007, 33, (9), pp. 635–637
    102. He, H., Garcia, E.A.: ‘Learning from imbalanced data’, IEEE Trans. Knowl. Data Eng., 2009, 21, (9), pp. 1263–1284
    103. Jing, X.Y., Wu, F., Dong, X., et al: ‘An improved SDA based defect prediction framework for both within-project and cross-project class-imbalance problems’, IEEE Trans. Softw. Eng., 2017, 43, (4), pp. 321–339
    104. Tan, M., Tan, L., Dara, S., et al: ‘Online defect prediction for imbalanced data’. Proc. 37th Int. Conf. Software Engineering, 2015, pp. 99–108
    105. Chen, L., Fang, B., Shang, Z., et al: ‘Tackling class overlap and imbalance problems in software defect prediction’, Softw. Qual. J., 2018, 26, (1), pp. 97–125
    106. Wu, F., Jing, X.Y., Dong, X., et al: ‘Cost-sensitive local collaborative representation for software defect prediction’. Proc. Int. Conf. Software Analysis, Testing and Evolution, 2016, pp. 102–107
    107. Liu, M., Miao, L., Zhang, D.: ‘Two-stage cost-sensitive learning for software defect prediction’, IEEE Trans. Reliab., 2014, 63, (2), pp. 676–686
    108. Rodriguez, D., Herraiz, I., Harrison, R., et al: ‘Preliminary comparison of techniques for dealing with imbalance in software defect prediction’. Proc. 18th Int. Conf. Evaluation and Assessment in Software Engineering, 2014, pp. 1–10
    109. Malhotra, R., Khanna, M.: ‘An empirical study for software change prediction using imbalanced data’, Empir. Softw. Eng., 2017, 22, (6), pp. 2806–2851
    110. Herzig, K., Just, S., Zeller, A.: ‘It's not a bug, it's a feature: how misclassification impacts bug prediction’. Proc. 2013 Int. Conf. Software Engineering, 2013, pp. 392–401
    111. Rahman, F., Posnett, D., Herraiz, I., et al: ‘Sample size vs. bias in defect prediction’. Proc. 2013 9th Joint Meeting on Foundations of Software Engineering, 2013, pp. 147–157
    112. Tantithamthavorn, C., McIntosh, S., Hassan, A.E., et al: ‘The impact of mislabelling on the performance and interpretation of defect prediction models’. Proc. 37th Int. Conf. Software Engineering, 2015, pp. 812–823
    113. Herzig, K., Just, S., Zeller, A.: ‘The impact of tangled code changes on defect prediction models’, Empir. Softw. Eng., 2016, 21, (2), pp. 303–336
    114. Peters, F., Menzies, T.: ‘Privacy and utility for defect prediction: experiments with morph’. Proc. 34th Int. Conf. Software Engineering, 2012, pp. 189–199
    115. Qi, F., Jing, X.Y., Zhu, X., et al: ‘Privacy preserving via interval covering based subclass division and manifold learning based bi-directional obfuscation for effort estimation’. Proc. 31st IEEE/ACM Int. Conf. Automated Software Engineering, 2016, pp. 75–86
    116. Peters, F., Menzies, T., Layman, L.: ‘Lace2: better privacy-preserving data sharing for cross project defect prediction’. Proc. 37th Int. Conf. Software Engineering, 2015, pp. 801–811
    117. Mende, T., Koschke, R.: ‘Effort-aware defect prediction models’. Proc. 14th European Conf. Software Maintenance and Reengineering, 2010, pp. 107–116
    118. Kamei, Y., Matsumoto, S., Monden, A., et al: ‘Revisiting common bug prediction findings using effort-aware models’. Proc. IEEE Int. Conf. Software Maintenance, 2010, pp. 1–10
    119. Zhou, Y., Xu, B., Leung, H., et al: ‘An in-depth study of the potentially confounding effect of class size in fault prediction’, ACM Trans. Softw. Eng. Methodol., 2014, 23, (1), p. 10
    120. Yang, Y., Zhou, Y., Lu, H., et al: ‘Are slice-based cohesion metrics actually useful in effort-aware post-release fault-proneness prediction? An empirical study’, IEEE Trans. Softw. Eng., 2015, 41, (4), pp. 331–357
    121. Sarkar, S., Kak, A.C., Rama, G.M.: ‘Metrics for measuring the quality of modularization of large-scale object-oriented software’, IEEE Trans. Softw. Eng., 2008, 34, (5), pp. 700–720
    122. Zhao, Y., Yang, Y., Lu, H., et al: ‘An empirical analysis of package-modularization metrics: implications for software fault-proneness’, Inf. Softw. Technol., 2015, 57, (1), pp. 186–203
    123. Yang, Y., Zhou, Y., Liu, J., et al: ‘Effort-aware just-in-time defect prediction: simple unsupervised models could be better than supervised models’. Proc. 24th ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2016, pp. 157–168
    124. Yang, Y., Harman, M., Krinke, J., et al: ‘An empirical study on dependence clusters for effort-aware fault-proneness prediction’. Proc. 31st IEEE/ACM Int. Conf. Automated Software Engineering, 2016, pp. 296–307
    125. Ma, W., Chen, L., Yang, Y., et al: ‘Empirical analysis of network measures for effort-aware fault-proneness prediction’, Inf. Softw. Technol., 2016, 69, pp. 50–70
    126. Zhao, Y., Yang, Y., Lu, H., et al: ‘Understanding the value of considering client usage context in package cohesion for fault-proneness prediction’, Autom. Softw. Eng., 2017, 24, (2), pp. 393–453
    127. Bennin, K.E., Keung, J., Monden, A., et al: ‘Investigating the effects of balanced training and testing datasets on effort-aware fault prediction models’. Proc. IEEE 40th Annual Computer Software and Applications Conf., 2016, pp. 154–163
    128. Panichella, A., Alexandru, C.V., Panichella, S., et al: ‘A search-based training algorithm for cost-aware defect prediction’. Proc. 2016 Genetic and Evolutionary Computation Conf., 2016, pp. 1077–1084
    129. Shepperd, M., Bowes, D., Hall, T.: ‘Researcher bias: the use of machine learning in software defect prediction’, IEEE Trans. Softw. Eng., 2014, 40, (6), pp. 603–616
    130. Tantithamthavorn, C., McIntosh, S., Hassan, A.E., et al: ‘Comments on ‘researcher bias: the use of machine learning in software defect prediction’’, IEEE Trans. Softw. Eng., 2016, 42, (11), pp. 1092–1094
    131. Ghotra, B., McIntosh, S., Hassan, A.E.: ‘Revisiting the impact of classification techniques on the performance of defect prediction models’. Proc. 37th Int. Conf. Software Engineering, 2015, pp. 789–800
    132. Tantithamthavorn, C., McIntosh, S., Hassan, A.E., et al: ‘Automated parameter optimization of classification techniques for defect prediction models’. Proc. 38th Int. Conf. Software Engineering, 2016, pp. 321–332
    133. Bowes, D., Hall, T., Petrić, J.: ‘Software defect prediction: do different classifiers find the same defects?’, Softw. Qual. J., 2017. Available at https://doi.org/10.1007/s11219-016-9353-3
    134. Bowes, D., Hall, T., Gray, D.: ‘Dconfusion: a technique to allow cross study performance evaluation of fault prediction studies’, Autom. Softw. Eng., 2014, 21, (2), pp. 287–313
    135. Malhotra, R., Khanna, M.: ‘An exploratory study for software change prediction in object-oriented systems using hybridized techniques’, Autom. Softw. Eng., 2017, 24, (3), pp. 673–717
    136. Zhang, F., Hassan, A.E., McIntosh, S., et al: ‘The use of summation to aggregate software metrics hinders the performance of defect prediction models’, IEEE Trans. Softw. Eng., 2017, 43, (5), pp. 476–491
    137. He, P., Li, B., Liu, X., et al: ‘An empirical study on software defect prediction with a simplified metric set’, Inf. Softw. Technol., 2015, 59, pp. 170–190
    138. Yang, Y., Zhao, Y., Liu, C., et al: ‘An empirical investigation into the effect of slice types on slice-based cohesion metrics’, Inf. Softw. Technol., 2016, 75, pp. 90–104
    139. Jaafar, F., Guéhéneuc, Y.G., Hamel, S., et al: ‘Evaluating the impact of design pattern and anti-pattern dependencies on changes and faults’, Empir. Softw. Eng., 2016, 21, (3), pp. 896–931
    140. Chen, L., Ma, W., Zhou, Y., et al: ‘Empirical analysis of network measures for predicting high severity software faults’, Sci. China Inf. Sci., 2016, 59, (12), pp. 1–18
    141. Kamei, Y., Fukushima, T., McIntosh, S., et al: ‘Studying just-in-time defect prediction using cross-project models’, Empir. Softw. Eng., 2016, 21, (5), pp. 2072–2106
    142. Petrić, J., Bowes, D., Hall, T., et al: ‘Building an ensemble for software defect prediction based on diversity selection’. Proc. 10th ACM/IEEE Int. Symp. Empirical Software Engineering and Measurement, 2016, p. 46
    143. Liu, W., Liu, S., Gu, Q., et al: ‘Empirical studies of a two-stage data preprocessing approach for software fault prediction’, IEEE Trans. Reliab., 2016, 65, (1), pp. 38–53
    144. Xu, Z., Liu, J., Yang, Z., et al: ‘The impact of feature selection on defect prediction performance: an empirical comparison’. Proc. IEEE 27th Int. Symp. Software Reliability Engineering, 2016, pp. 309–320
    145. Mende, T.: ‘Replication of defect prediction studies: problems, pitfalls and recommendations’. Proc. 6th Int. Conf. Predictive Models in Software Engineering, 2010, pp. 1–10