IET Software
Volume 8, Issue 3, June 2014
Fuzzy entropy-based framework for multi-faceted test case classification and selection: an empirical study
- Author(s): Manoj Kumar ; Arun Sharma ; Rajesh Kumar
- Source: IET Software, Volume 8, Issue 3, pp. 103–112
- DOI: 10.1049/iet-sen.2012.0198
- Type: Article
Software testing is complex, ambiguous, labour-intensive, costly and error prone, yet it is a core activity of software development. Devising cost-effective and adequate strategies for test case optimisation has long been a research issue in software testing. Existing test case optimisation techniques do not provide an optimal solution in terms of precision, completeness, cost and adequacy. The authors have previously proposed a fuzzy logic-based multi-faceted measurement framework for test case classification and fitness evaluation. Although it reduces testing effort, cost and incompleteness and increases adequacy, ambiguity remains in the classification and selection of some test cases because their fitness values are themselves ambiguous. Hence, a technique is needed to measure and resolve this ambiguity in the test case classification and selection problem. In this paper, the authors unify their earlier framework by introducing a fuzzy entropy-based approach. The proposed unified framework filters out high-ambiguity test cases and selects low-ambiguity test cases for exercising the software under test (SUT). The unified framework is evaluated on artefacts of benchmark applications, and the results show that it enhances classification accuracy by reducing ambiguity and increases the number of test cases classified correctly.
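The core idea of entropy-based filtering can be sketched as follows. This is a minimal illustration, not the authors' framework: the De Luca–Termini entropy formulation, the membership values and the threshold are all assumptions.

```python
import math

def fuzzy_entropy(mu: float) -> float:
    """De Luca-Termini fuzzy entropy of one membership value,
    normalised to [0, 1]; maximal at mu = 0.5 (most ambiguous)."""
    if mu in (0.0, 1.0):
        return 0.0
    return -(mu * math.log2(mu) + (1 - mu) * math.log2(1 - mu))

def select_low_ambiguity(cases, threshold=0.8):
    """Keep test cases whose fitness membership is unambiguous enough.
    `cases` is a list of (name, fitness_membership) pairs."""
    return [name for name, mu in cases if fuzzy_entropy(mu) <= threshold]

cases = [("tc1", 0.9), ("tc2", 0.5), ("tc3", 0.15), ("tc4", 0.55)]
print(select_low_ambiguity(cases))  # -> ['tc1', 'tc3']; tc2/tc4 near 0.5 are ambiguous
```

Memberships near 0.5 carry maximal entropy, so those test cases are the ones "chunked out" before exercising the SUT.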
Empirical study of fault prediction for open-source systems using the Chidamber and Kemerer metrics
- Author(s): Raed Shatnawi
- Source: IET Software, Volume 8, Issue 3, pp. 113–119
- DOI: 10.1049/iet-sen.2013.0008
- Type: Article
Software testers are usually confronted with projects that contain faults. Predicting a class's fault-proneness is vital for minimising cost and improving the effectiveness of software testing. Previous research on software metrics has shown strong relationships between software metrics and faults in object-oriented systems using a binary dependent variable. However, such models do not consider the history of faults in classes. In this work, a dependent variable is proposed that uses fault history to rate classes into four categories (none, low risk, medium risk and high risk) and to improve the predictive capability of fault models. The study is conducted on many releases of four open-source systems and tests the statistical differences among seven machine learning algorithms to find whether the proposed variable can be used to build better prediction models. The performance of the classifiers using the four categories is significantly better than with the binary variable. In addition, the results show improvements in the reliability of the prediction models as the software matures. Therefore fault history improves the prediction of fault-proneness of classes in open-source systems.
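A fault-history-based dependent variable of this kind can be sketched as below. The cut points and class names are illustrative assumptions; the paper's actual category thresholds are not reproduced here.

```python
def risk_category(fault_history, low=1, high=3):
    """Rate a class by its cumulative fault count across releases.
    Thresholds `low` and `high` are illustrative, not the paper's."""
    total = sum(fault_history)
    if total == 0:
        return "none"
    if total <= low:
        return "low"
    if total <= high:
        return "medium"
    return "high"

# Hypothetical per-release fault counts for three classes.
history = {"Parser": [0, 0, 0], "Lexer": [1, 0, 0], "Engine": [2, 3, 1]}
labels = {cls: risk_category(h) for cls, h in history.items()}
print(labels)  # -> {'Parser': 'none', 'Lexer': 'low', 'Engine': 'high'}
```

The four-valued label then replaces the usual faulty/not-faulty binary target when training the classifiers.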
Computation of alias sets from shape graphs for comparison of shape analysis precision
- Author(s): Viktor Pavlu ; Markus Schordan ; Andreas Krall
- Source: IET Software, Volume 8, Issue 3, pp. 120–133
- DOI: 10.1049/iet-sen.2012.0049
- Type: Article
Various shape analyses have been introduced, but their precision often cannot be compared because they use different representations of analysis results. The aim of the authors' work was to compare the precision of two well-known graph-based shape analyses, those presented by Sagiv, Reps and Wilhelm (SRW) and by Nielson, Nielson and Hankin (NNH). Rather than comparing the shape graphs directly, their comparison uses alias information extracted from the graphs: for every pair (e1, e2) of pointer expressions in a programme, and for every programme point pt, the authors determine the aliasing between e1 and e2. In their experiments, they use a new algorithm for extracting this alias information, called the 'common tails' algorithm, that is strictly more precise than the technique introduced by Reps, Sagiv and Wilhelm (RSW). They present two interesting results: (i) using the common tails algorithm, they reduce the number of conservative results (strict may-aliases) by a factor of up to 5 compared with the original RSW algorithm, while incurring an overhead of no more than 10% of analysis run-time; and (ii) NNH is more precise than SRW by a factor of 1.62 on average for their set of benchmarks.
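The comparison metric amounts to counting conservative (strict may-alias) answers per expression pair, since fewer "may" answers means a more precise analysis. A toy sketch follows; the pointer pairs and per-analysis verdicts are hypothetical, not taken from the paper's benchmarks.

```python
from collections import Counter

def precision_profile(answers):
    """Tally alias verdicts for one analysis; each verdict is
    'must', 'may' (conservative/imprecise) or 'no' alias."""
    return Counter(answers.values())

# Hypothetical verdicts from two analyses over the same programme point.
srw = {("p", "q"): "may", ("p", "r"): "may", ("q", "r"): "no"}
nnh = {("p", "q"): "must", ("p", "r"): "no", ("q", "r"): "no"}

srw_may = precision_profile(srw)["may"]
nnh_may = precision_profile(nnh)["may"]
print(srw_may, nnh_may)  # -> 2 0; fewer 'may' answers = more precise
```

Extracting such alias sets from both analyses' shape graphs is what makes their otherwise incomparable result representations directly comparable.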
Automatic trust calculation for service-oriented systems
- Author(s): Bo Ye ; Maziar Nekovee ; Anjum Pervez ; Mohammad Ghavami
- Source: IET Software, Volume 8, Issue 3, pp. 134–142
- DOI: 10.1049/iet-sen.2013.0056
- Type: Article
Among the many service providers offering identical or similar services with varying quality of service, trust is essential for service consumers to find the right one. Manually assigning feedback is time-consuming and suffers from several drawbacks, so only automatic trust calculation is feasible for large-scale service-oriented applications. An automatic method of trust calculation is therefore proposed. To make the calculation accurate, a Kalman filter is adopted to filter out malicious non-trust quality criterion (NTQC) values rather than malicious trust values, and it is further improved by considering the relationship between NTQC values and variances to offer higher detection accuracy. Since dishonest or inaccurate values can still influence trust values, the similarity between consumers is used to weight data from other consumers. As existing models used only the Euclidean distance function and ignored others, a collection of distance functions is modified to calculate the similarity. Finally, experiments are carried out to assess the robustness of the proposed model. The results show that the improved algorithm offers higher detection accuracy, and that another distance function outperformed the Euclidean function.
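The malicious-value gating step can be sketched with a one-dimensional Kalman filter over a reported quality series. The noise parameters, gate width and sample values below are illustrative assumptions, not the paper's calibration.

```python
def kalman_filter_ntqc(observations, q=1e-3, r=0.01, gate=3.0):
    """1-D Kalman filter over a quality-criterion series (e.g. availability).
    Observations whose innovation exceeds `gate` standard deviations of the
    predicted variance are flagged as potentially malicious and skipped."""
    x, p = observations[0], 1.0          # initial state estimate and variance
    accepted, flagged = [x], []
    for z in observations[1:]:
        p_pred = p + q                   # predict: variance grows by process noise
        innov = z - x
        if innov * innov > gate * gate * (p_pred + r):
            flagged.append(z)            # outlier: reject it, keep the prediction
            p = p_pred
            continue
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x + k * innov                # update state with the accepted value
        p = (1 - k) * p_pred
        accepted.append(x)
    return accepted, flagged

vals = [0.80, 0.82, 0.79, 0.10, 0.81, 0.83]
est, bad = kalman_filter_ntqc(vals)
print(bad)  # -> [0.1]; the implausible report is gated out
```

Filtering the raw NTQC values, rather than the already-aggregated trust values, is the abstract's key point: an attack is caught before it can distort the trust computation downstream.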