IET Software
Volume 5, Issue 2, April 2011
Online ISSN 1751-8814 | Print ISSN 1751-8806
- Author(s): D. Dranidis ; H. Zhu ; S. Masticola
- Source: IET Software, Volume 5, Issue 2, p. 111 –112
- DOI: 10.1049/iet-sen.2011.9016
- Type: Article
- Author(s): A. Avritzer ; E. de Souza e Silva ; R.M.M. Leão ; E.J. Weyuker
- Source: IET Software, Volume 5, Issue 2, p. 113 –119
- DOI: 10.1049/iet-sen.2010.0035
- Type: Article
The authors present a new approach to the automated generation of test cases for demonstrating the reliability of large industrial mission-critical systems. In this study they extend earlier work by using a performability model to track resource usage and resource failures. Results from the transient Markov chain analysis are used to estimate the software reliability at a given system execution time.

- Author(s): P. Bunyakiati and A. Finkelstein
- Source: IET Software, Volume 5, Issue 2, p. 120 –131
- DOI: 10.1049/iet-sen.2010.0032
- Type: Article
Software modelling standards such as the unified modelling language (UML) provide complex visual languages for producing the artefacts of software systems. Software tools support the production of these artefacts by providing model constructs and their usage rules. Owing to the size and complexity of these standards' specifications, establishing the compliance of software modelling tools with the standards can be difficult. As a result, many software tools that advertise standards compliance may fail to live up to their claims. This study presents a compliance testing framework to determine the conditions of compliance of tools and to diagnose the causes of non-compliance issues. The Java-UML lightweight enumerator (JULE) tool realises this framework by providing a powerful technology for creating a compliance test suite for modelling tools. JULE generates test cases only up to non-isomorphism to avoid combinatorial explosion. An experiment with UML 1.4 is presented: the authors test ArgoUML for its compliance with the UML 1.4 specification. They also report findings on four UML 2.x tools: Eclipse Galileo UML2, Enterprise Architect 7.5, Poseidon for UML 8.0 and MagicDraw 16.6.

- Author(s): M.X. Lin ; Y.L. Chen ; K. Yu ; G.S. Wu
- Source: IET Software, Volume 5, Issue 2, p. 132 –141
- DOI: 10.1049/iet-sen.2010.0029
- Type: Article
In the context of test data generation, symbolic execution has attracted increasing attention as computing power continues to grow. Experiments show that test generation tools based on symbolic execution can achieve high coverage and find bugs in real applications. However, symbolic execution still has limitations in handling some complex program structures such as pointers, arrays and library functions. To address this problem, this study proposes a technique called lazy symbolic execution, which combines symbolic execution with a lazy evaluation strategy. The authors' approach is motivated by the observation that some program structures can be reasoned about symbolically, whereas others have to be evaluated concretely. Traditional symbolic execution copes with the former well, whereas lazy symbolic evaluation is used to handle the latter. However, lazy symbolic evaluation introduces intermediate variables into path constraints. To eliminate those variables, concrete values for some input variables are first obtained by constraint solving or searching. Then the given path is executed again using inputs consisting of concrete and symbolic values. The procedure is repeated until all intermediate variables are eliminated. The authors have implemented a prototype tool and performed some experiments; the empirical results show the effectiveness of their approach.
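The lazy strategy the abstract describes — record an intermediate variable for a call that cannot be modelled symbolically, then concretise inputs and re-execute until the intermediate is eliminated — can be sketched as follows. This is an illustrative toy under assumed names (`opaque`, `lazy_symbolic_test_input`), not the authors' prototype, and a brute-force search stands in for the constraint solver:

```python
# Target path through a toy program:  t = opaque(x); if t + 3 == 12: BUG
# `opaque` stands in for a library function with no symbolic model.

def opaque(v):
    # Treated as a black box by the symbolic engine.
    return v * v

def lazy_symbolic_test_input(search_space=range(-100, 101)):
    # Step 1: symbolic pass. The call opaque(x) cannot be expressed
    # symbolically, so introduce the intermediate variable t = opaque(x)
    # and record the path constraint over t:  t + 3 == 12.
    path_constraint = lambda t: t + 3 == 12
    # Step 2: eliminate the intermediate variable by searching for a
    # concrete x whose concrete execution of opaque() satisfies the
    # constraint (a stand-in for the solving/searching process).
    for x in search_space:
        t = opaque(x)           # concrete re-execution binds t
        if path_constraint(t):  # all intermediates now eliminated
            return x
    return None

# A concrete input driving the program down the target path:
x = lazy_symbolic_test_input()
```

In a real engine the re-execution step would mix the concretised inputs with still-symbolic ones and iterate until no intermediate variables remain in the path constraint.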
Editorial: Automation of Software Test (AST '09)
Automated generation of test cases using a performability model
Standards compliance testing for unified modelling language tools
Lazy symbolic execution for test data generation
- Author(s): X. Li ; X. Qiu ; L. Wang ; X. Chen ; Z. Zhou ; L. Yu ; J. Zhao
- Source: IET Software, Volume 5, Issue 2, p. 142 –156
- DOI: 10.1049/iet-sen.2009.0009
- Type: Article
The authors use unified modelling language (UML) 2.0 interaction overview diagrams (IODs) and sequence diagrams to construct simple and expressive scenario-based specifications, and present an approach to the runtime verification of Java programs for exceptional consistency and mandatory consistency. Exceptional consistency requires that any forbidden scenario described by a given IOD never happens during the execution of a program; mandatory consistency requires that if a reference scenario described by a given sequence diagram occurs during the execution of a program, it must immediately adhere to a scenario described by a given IOD. In the approach, the authors first instrument the program under verification so as to gather the execution traces related to a given scenario-based specification; they then drive the instrumented program to execute, generating the execution traces; finally, they check whether the collected traces satisfy the given specification. The approach leads to a supporting tool for testing in which UML interaction models are used as automatic test oracles to detect wrong temporal orderings of message interactions in programs.

- Author(s): R.R. Palacio ; A. Vizcaíno ; A.L. Morán ; V.M. González
- Source: IET Software, Volume 5, Issue 2, p. 157 –171
- DOI: 10.1049/iet-sen.2009.0097
- Type: Article
Distributed software development is a working philosophy that the software industry is currently adopting. Organisations may benefit from the opportunities this shift has created, although they must also confront new challenges. In this study, the authors focus on the lack of timely, adequate opportunities for informal interaction, which has been identified as an important obstacle to overcoming limitations in coordination, communication and trust. The authors address this problem by obtaining information from the personal activities of remote colleagues. They propose and define collaborative working spheres (CWS), arguing that CWS permit the identification of opportunities for interaction at appropriate moments. The concept is illustrated with the design of CWS-instant messaging (CWS-IM), an extended IM tool that supports the CWS concept. The tool was tested by 16 distributed software development (DSD) workers in an initial scenario-based evaluation. The results show favourable evidence for both the perceived usefulness and the ease of use of CWS-IM.

- Author(s): T. Martínez-Ruiz ; F. García ; M. Piattini ; J. Münch
- Source: IET Software, Volume 5, Issue 2, p. 172 –187
- DOI: 10.1049/iet-sen.2010.0020
- Type: Article
Variability in software process models justifies tailoring them to meet the specific goals and characteristics of organisations and projects. Existing process modelling notations typically lack constructs appropriate for expressing process variability. To fill this gap, the authors have extended the software process engineering metamodel (SPEM) into vSPEM by adding new variability constructs (such as variants and variation points). This article presents an empirical validation of whether the variability constructs supported by vSPEM are more appropriate for modelling variant-rich processes than SPEM, in terms of the understandability both of the notation and of its variability mechanisms. The results indicate that the understandability of the vSPEM variability mechanisms is 126.99% higher than that of SPEM's, whereas process diagram understandability is 34.87% lower with vSPEM than with SPEM. Comparing these relative results, the gain in understandability of the variation mechanisms is 3.64 times the loss in understandability of the diagrams. The results suggest that accepting a slight decrease in the understandability of the diagrams can yield a large increase in understandability when using the variability mechanisms of vSPEM.

- Author(s): N. Upadhyay ; B.M. Deshpande ; V.P. Agrawal
- Source: IET Software, Volume 5, Issue 2, p. 188 –200
- DOI: 10.1049/iet-sen.2010.0049
- Type: Article
Software component usability is a critical characteristic and marks a significant difference between conventional software quality models and component quality models. Current procedures and techniques concentrate mostly on the usability of software systems from the end-user's point of view (the person who interacts with the system). Far less research has analysed usability characteristics from the viewpoint of the system designer, component selector, component acquirer and system integrator. This study provides a methodology based on a digraph and matrix approach for the in-depth analysis of a component's usability characteristic. A digraph and its (permanent) matrix are used to analyse component usability by considering all sub-characteristics and attributed factors, along with all levels of interactive (inter- and intra-) complexity, based on a concurrent approach. The proposed methodology benefits the component designer, component developer, system developer and decision maker. The formation of hypothetical maximum (best) and hypothetical minimum (worst) indices is proposed; based on these, users can make decisions on the selection, evaluation and ranking of potential candidates and, wherever possible, improve component design and development. The applicability of the methodology is demonstrated with an illustrative example.

- Author(s): D. Evangelin Geetha ; T.V. Suresh Kumar ; K. Rajani Kanth
- Source: IET Software, Volume 5, Issue 2, p. 201 –215
- DOI: 10.1049/iet-sen.2010.0075
- Type: Article
Performance is an important non-functional attribute to consider when producing quality software. Software performance engineering (SPE) is a methodology that plays a significant role in software engineering by assessing the performance of software systems early in the lifecycle. Gathering performance data is an essential aspect of the SPE approach. The authors propose a methodology for gathering data during the feasibility study by exploiting the use case point approach, the gearing factor and the COCOMO model. The proposed methodology is used to estimate the performance data required for performance assessment in the integrated performance prediction process (IP3) model. The gathered data are used as input for solving two models: (i) the use case performance model and (ii) the system model. The methodology is illustrated with a case study of an airline reservation application. A regression analysis is carried out to validate the response time obtained from the use case performance model; the analysis shows that the proposed estimation can be used alongside a performance walkthrough in data gathering. The performance metrics are obtained by solving the system model, and the behaviour of the hardware resources is observed. Bottleneck resources are identified and the performance parameters are optimised using sensitivity analysis.

- Author(s): T. Wijayasiriwardhane ; R. Lai ; K.C. Kang
- Source: IET Software, Volume 5, Issue 2, p. 216 –228
- DOI: 10.1049/iet-sen.2009.0051
- Type: Article
Effort estimation for software development is an important sub-discipline of software engineering and has been the focus of much research over the last couple of decades. In recent years, software development has moved closer to an engineering discipline through the introduction of component-based software development (CBSD). The industry has reported significant advantages of CBSD over traditional software development paradigms. However, the introduction of CBSD has also brought a host of unique challenges to software effort estimation that are quite different from those associated with traditional software development. Owing to the increasing use of the CBSD approach in recent years, effort estimation for CBSD is a particularly important area of research with direct relevance to industry. In this study, the authors survey the most up-to-date research published on predicting the effort of CBSD. They analyse the surveyed approaches in terms of modelling technique, the type of data required, the type of estimation provided, the lifecycle activities covered and their level of acceptability with regard to any validation. The aim of the survey is to provide a better understanding of cost and schedule estimation approaches for CBSD.

- Author(s): H.A. Duran-Limon ; M. Siller ; G.S. Blair ; A. Lopez ; J.F. Lombera-Landa
- Source: IET Software, Volume 5, Issue 2, p. 229 –237
- DOI: 10.1049/iet-sen.2009.0091
- Type: Article
Current middleware does not offer enough support to cover the demands of emerging application domains, such as embedded systems or those featuring distributed multimedia services. These kinds of applications often have timeliness constraints and yet are highly susceptible to dynamic and unexpected changes in their environment. There is therefore a clear need to introduce adaptation so that these applications can deal with such unpredictable changes. For large-scale applications, resource adaptation can be achieved using scheduling or allocation algorithms, but such a task can be complex and error-prone. Virtual machines (VMs) represent a higher-level approach, whereby resources can be managed without dealing with lower-level details such as scheduling algorithms and scheduling parameters. However, the overhead imposed by traditional VMs is unsuitable for real-time applications, and virtualisation has not previously been exploited as a means to achieve resource adaptation. This study presents a lightweight VM framework that exploits application-level virtualisation to achieve resource adaptation in middleware for soft real-time applications. Experimental results are presented to validate the approach.

- Author(s): R.G. Crespo
- Source: IET Software, Volume 5, Issue 2, p. 238 –245
- DOI: 10.1049/iet-sen.2010.0143
- Type: Article
The introduction and modification of features in Internet applications may result in undesired behaviours, an effect known as feature interaction (FI). We advocate that constraint logic programming (CLP) is well suited to detecting and resolving FIs within a non-monotonic system modelled in layers, with interfaces defined by predicate negation. We illustrate the specification of basic email services and the ten most widely used email features. CLP provides mechanisms to detect FIs through model checking; FI resolution is implemented on top of this, following priority and tail-elimination strategies.

- Author(s): L. Fürst ; M. Mernik ; V. Mahnič
- Source: IET Software, Volume 5, Issue 2, p. 246 –261
- DOI: 10.1049/iet-sen.2010.0081
- Type: Article
Graph grammars and graph grammar parsers are to visual languages what string grammars and parsers are to textual languages. A graph grammar specifies a set of valid graphs and can thus be used to formalise the syntax of a visual language; a graph grammar parser is a tool for recognising valid programs in such a formally defined visual language. A parser for context-sensitive graph grammars, which have proved suitable for formalising real-world visual languages, was developed by Rekers and Schürr. We propose three improvements to this parser: one enlarges the class of parsable graph grammars, while the other two increase the parser's computational efficiency. Experimental results show that for some (meaningful) graph grammars, our improvements can enhance the parser's performance by orders of magnitude. The proposed improvements will hopefully increase both the parser's applicability and interest in visual language parsing in general.
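For a flavour of what graph grammar parsing involves, here is a minimal bottom-up sketch for a hypothetical edge-replacement grammar of directed chains, with productions `Path ::= e` and `Path ::= Path e`. It reduces matched right-hand sides until a single nonterminal edge spans the graph; it is a toy illustration of the reduce-until-start-symbol idea, not the Rekers-Schürr algorithm:

```python
def parse_path(edges):
    """edges: list of (label, src, dst) triples. Returns True iff the
    graph is a single directed chain derivable from the toy grammar."""
    # Reduce  Path ::= e  on every terminal edge.
    work = [("Path", u, v) if lab == "e" else (lab, u, v)
            for (lab, u, v) in edges]
    changed = True
    while changed and len(work) > 1:
        changed = False
        for i, (l1, u, v) in enumerate(work):
            for j, (l2, x, y) in enumerate(work):
                # Reduce  Path ::= Path e  when two Path edges meet
                # head-to-tail at a shared node (v == x).
                if i != j and l1 == l2 == "Path" and v == x:
                    rest = [e for k, e in enumerate(work)
                            if k not in (i, j)]
                    work = rest + [("Path", u, y)]
                    changed = True
                    break
            if changed:
                break
    # Accept when one Path edge covers the whole input.
    return len(work) == 1 and work[0][0] == "Path"
```

A real context-sensitive graph grammar parser must additionally handle embedding rules, overlapping matches and backtracking, which is where the efficiency improvements discussed in the article matter.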
UML interaction model-driven runtime verification of Java programs
Tool to facilitate appropriate interaction in global software development
Modelling software process variability: an empirical study
Concurrent usability evaluation and design of software component: a digraph and matrix approach
Predicting the software performance during feasibility study
Effort estimation of component-based software development – a survey
Using lightweight virtual machines to achieve resource adaptation in middleware
Detecting and resolving email feature interactions through constraints
Improving the graph grammar parser of Rekers and Schürr