

IET Software
Volume 7, Issue 2, April 2013
Empirical study of software component integration process activities
- Author(s): Sajjad Mahmood
- Source: IET Software, Volume 7, Issue 2, p. 65–75
- DOI: 10.1049/iet-sen.2012.0120
- Type: Article
The component integration phase is key to the success of a component-based system (CBS) because of its profound impact on the quality of the software product. However, CBS integration is complex because components are rarely perfectly matched and ready for ‘plug and play’. The integration phase involves assembling pre-existing software components, usually developed by different parties, and writing glue-code to handle mismatches between the requirements of the CBS-to-be and the features of the available components. The objective of this study is to gain an in-depth understanding of the impact of integration process activities on the overall success of a CBS; it also investigates the inter-dependencies between the CBS integration process activities. A survey was developed and data were collected from CBS practitioners working in small-to-medium-sized organisations. The results show that ‘component functional specification’, ‘structural compatibility analysis’, ‘architectural model development’ and ‘early glue-code specification’ are integration process activities that correlate positively with the successful development of a CBS. However, the results also indicate that ‘quality properties analysis’ is not carried out as an integration process activity by the majority of CBS practitioners. Furthermore, the survey provides empirical evidence of a positive association between several key CBS integration process activities.
Does software error/defect identification matter in the Italian industry?
- Author(s): Giuseppe Scanniello ; Fausto Fasano ; Andrea De Lucia ; Genoveffa Tortora
- Source: IET Software, Volume 7, Issue 2, p. 76–84
- DOI: 10.1049/iet-sen.2011.0170
- Type: Article
The authors present the results of a descriptive survey to ascertain the relevance and typology of the software error/defect identification methods and approaches used in industrial practice. The study involved industries/organisations that develop and sell software as a main part of their business, or that develop software as an integral part of their products or services. The results indicate that software error/defect identification is regarded as highly relevant and concerns almost all of the interviewed companies. The most widely used practice is testing; increasing interest was also expressed in distributed inspection methods.
Test suite prioritisation using trace events technique
- Author(s): Kavitha Rajarathinam and Sureshkumar Natarajan
- Source: IET Software, Volume 7, Issue 2, p. 85–92
- DOI: 10.1049/iet-sen.2011.0203
- Type: Article
The size of a test suite and the execution time of its test cases determine the duration of regression testing. Testers can, however, order the test cases with a competent prioritisation technique to obtain an increased rate of fault detection, allowing earlier corrections and greater confidence that the software has been tested adequately. If execution must be suspended after some period, a prioritised test suite is more likely to be effective during that period than a random ordering. A better ordering is also achievable when the time available to run the test cases is known in advance. The main intention of this work is to prioritise regression-testing test cases. Several factors are considered and employed in the prioritisation algorithm; trace events are one of the most important, used to find the most significant test cases in a project. A requirement factor value is calculated, and a weightage is then computed and assigned to each test case based on these factors using a thresholding technique. The test cases are then prioritised according to the weightage allocated to them; executing them in this order greatly decreases computation cost and time. The proposed technique is efficient in prioritising regression test cases. After prioritisation, the new prioritised subsequences of the given unit test suites are executed on Java programs. The average percentage of faults detected (APFD) is used as an evaluation metric for assessing the ‘superiority’ of these orderings.
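The APFD evaluation metric mentioned in the abstract is standard and can be computed for any test-case ordering. A minimal sketch, with hypothetical test-case weightages and fault data (the abstract does not publish its datasets), might look like:

```python
def apfd(ordering, faults_detected_by):
    """Average Percentage of Faults Detected (APFD) for a test-case ordering.

    ordering: list of test-case ids in execution order.
    faults_detected_by: dict mapping test-case id -> set of fault ids it exposes.
    """
    n = len(ordering)
    all_faults = set().union(*faults_detected_by.values())
    m = len(all_faults)
    first_pos = {}  # fault id -> 1-based position of the first exposing test case
    for pos, tc in enumerate(ordering, start=1):
        for f in faults_detected_by.get(tc, ()):
            first_pos.setdefault(f, pos)
    tf_sum = sum(first_pos[f] for f in all_faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Hypothetical weightages (e.g. derived from trace-event factors); order highest first
weights = {"t1": 0.9, "t2": 0.4, "t3": 0.7}
faults = {"t1": {"f1", "f2"}, "t2": {"f3"}, "t3": {"f2", "f3"}}
prioritised = sorted(weights, key=weights.get, reverse=True)
print(prioritised, round(apfd(prioritised, faults), 3))  # ['t1', 't3', 't2'] 0.722
```

A higher APFD means faults are exposed earlier in the run, which is exactly what weight-based prioritisation aims to maximise.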
Validating dimension hierarchy metrics for the understandability of multidimensional models for data warehouse
- Author(s): Anjana Gosain ; Sushama Nagpal ; Sangeeta Sabharwal
- Source: IET Software, Volume 7, Issue 2, p. 93–103
- DOI: 10.1049/iet-sen.2012.0095
- Type: Article
Structural properties, including hierarchies, have been recognised as important factors influencing the quality of a software product. Metrics based on structural properties (structural complexity metrics) are widely used to assess quality attributes such as understandability, maintainability and fault-proneness of a software artefact. Although a few researchers have considered metrics based on dimension hierarchies to assess the quality of multidimensional models for data warehouses, certain aspects of dimension hierarchies, such as multiple hierarchies and dimension hierarchies shared among various dimensions, have not been considered in earlier works. In the authors’ previous work, they identified metrics based on these aspects that may contribute to structural complexity and, in turn, to the quality of multidimensional models for data warehouses. However, that work lacked theoretical and empirical validation of the proposed metrics, and a metric proposal is acceptable in practice only if it is theoretically and empirically valid. In this study, the authors provide a thorough validation of the metrics considered in their previous work. The metrics have been validated theoretically on the basis of Briand’s property-based framework, and empirically on the basis of a controlled experiment using statistical techniques such as correlation and linear regression. The results of these validations indicate that the metrics are either size or length measures and hence contribute significantly to the structural complexity of multidimensional models and have a considerable impact on the understandability of these models.
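The empirical validation step described above pairs a metric value per model with an understandability measure, then applies correlation and linear regression. A minimal sketch with entirely hypothetical data (the paper's experimental values are not reproduced here) is:

```python
# Hypothetical data: a hierarchy-based metric value per multidimensional model
# versus the mean time (seconds) subjects needed on understandability tasks.
metric = [2, 4, 5, 7, 9, 11]
time_s = [35, 48, 52, 66, 79, 90]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def ols(xs, ys):
    """Simple least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

r = pearson(metric, time_s)
slope, intercept = ols(metric, time_s)
```

A strong positive `r` and a positive regression slope are the kind of evidence the study uses to argue that higher metric values mean lower understandability (longer task times).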
Metaheuristic approach for constructing functional test-suites
- Author(s): Himer Avila-George ; Jose Torres-Jimenez ; Loreto Gonzalez-Hernandez ; Vicente Hernández
- Source: IET Software, Volume 7, Issue 2, p. 104–117
- DOI: 10.1049/iet-sen.2012.0074
- Type: Article
Today, software systems are complex and have many possible configurations. A deficient software testing process often leads to unfortunate consequences, including data loss, large economic losses, security breaches and even bodily harm. Thus, performing effective and economical testing is a key issue. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing economically sized test-suites that provide coverage of the most prevalent configurations. Mixed covering arrays (MCAs) are combinatorial structures that can represent these test-suites: matrices with one test case per row that are small in comparison with an exhaustive approach, yet guarantee a level of interaction coverage among the parameters involved. This study presents a metaheuristic approach based on a simulated annealing (SA) algorithm for constructing MCAs. The algorithm incorporates several distinguishing features, including an efficient heuristic to generate good-quality initial solutions and a compound neighbourhood function that combines two carefully designed neighbourhood functions. The experimental design involved a benchmark reported in the literature and two real cases of software components. The experimental evidence shows that the SA algorithm equals or improves on the results obtained by other approaches reported in the literature, and also finds the optimal solution in some of the solved cases.
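The interaction-coverage guarantee of a strength-2 MCA can be checked directly: every value pair of every parameter pair must appear in some row. A minimal sketch with a hypothetical mixed-alphabet parameter set (not one of the paper's benchmarks) is:

```python
from itertools import combinations, product

# Hypothetical component configuration space with mixed alphabet sizes (2, 3, 2)
domains = {
    "os":      ["linux", "windows"],
    "db":      ["mysql", "postgres", "sqlite"],
    "browser": ["firefox", "chrome"],
}

def uncovered_pairs(suite, domains):
    """Return the set of 2-way interactions not exercised by the suite.

    suite: list of dicts mapping parameter name -> chosen value
    (one test case per row, as in a mixed covering array of strength 2).
    """
    required = set()
    for (p1, vs1), (p2, vs2) in combinations(domains.items(), 2):
        for v1, v2 in product(vs1, vs2):
            required.add((p1, v1, p2, v2))
    for row in suite:
        for p1, p2 in combinations(domains, 2):
            required.discard((p1, row[p1], p2, row[p2]))
    return required

# Six rows cover all 16 required pairs, versus 12 rows for exhaustive testing
suite = [
    {"os": "linux",   "db": "mysql",    "browser": "firefox"},
    {"os": "windows", "db": "mysql",    "browser": "chrome"},
    {"os": "windows", "db": "postgres", "browser": "firefox"},
    {"os": "linux",   "db": "postgres", "browser": "chrome"},
    {"os": "windows", "db": "sqlite",   "browser": "firefox"},
    {"os": "linux",   "db": "sqlite",   "browser": "chrome"},
]
print(len(uncovered_pairs(suite, domains)))  # 0
```

Metaheuristics such as the paper's SA algorithm search for the smallest N for which a suite like this drives the uncovered-pair count to zero.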
Early performance assessment in component-based software systems
- Author(s): Jaber Karimpour ; Ayaz Isazadeh ; Habib Izadkhah
- Source: IET Software, Volume 7, Issue 2, p. 118–128
- DOI: 10.1049/iet-sen.2011.0143
- Type: Article
Most techniques for assessing the qualitative characteristics of software are applied in the testing phase of development, yet assessing performance early in the software development process is particularly important for risk management. Software architecture, as the first product, plays an important role in the development of complex software systems: using the architecture, quality attributes (such as performance, reliability and security) can be evaluated at the early stages of development. In this study, the authors present a framework that takes advantage of the architectural description to evaluate software performance. The static structure and the architectural behaviour of a software system are described with the component diagram and the sequence diagram of the Unified Modelling Language (UML), respectively; the described model is then automatically converted into ‘interface automata’, which provide the formal foundation for the evaluation. Finally, architectural performance is evaluated using ‘queuing theory’. The proposed framework can help the software architect to choose an appropriate architecture in terms of quality, or prompt necessary changes to the selected architecture. The main difference between the proposed method and others is that it benefits from informal description methods, such as UML, to describe the architecture of software systems, while also employing a formal and lightweight language, called ‘interface automata’, to provide the infrastructure for verification and evaluation.
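The abstract does not fix a particular queueing model, but the simplest building block of such an evaluation is an M/M/1 queue per component. A minimal sketch with hypothetical arrival and service rates is:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics for one component's request queue.

    arrival_rate (lambda) and service_rate (mu) in requests/second;
    the queue is stable only when lambda < mu.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate                    # utilisation
    mean_response = 1.0 / (service_rate - arrival_rate)  # W = 1 / (mu - lambda)
    mean_queue_len = rho * rho / (1.0 - rho)             # Lq = rho^2 / (1 - rho)
    return rho, mean_response, mean_queue_len

# Hypothetical component: 8 req/s offered load, 10 req/s service capacity
rho, w, lq = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
print(rho, w, lq)  # 0.8 0.5 3.2
```

Feeding per-component rates derived from the interface-automata model into formulas like these is what lets an architect compare candidate architectures on predicted response time before any code exists.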