Online ISSN
1751-8814
Print ISSN
1751-8806
IET Software
Volume 6, Issue 4, August 2012
- Author(s): F. Siddique and O. Maqbool
- Source: IET Software, Volume 6, Issue 4, p. 283 –295
- DOI: 10.1049/iet-sen.2012.0027
- Type: Article
As the requirements of organisations change, so do the software systems within them. When changes are carried out under tight deadlines, software developers often do not follow software engineering principles, which deteriorates the structure of the software. A badly structured system is difficult to understand for further changes. To improve structure, re-modularisation may be carried out. Clustering techniques have been used to facilitate automatic re-modularisation. However, clusters produced by clustering algorithms are difficult to comprehend unless they are labelled appropriately. Manual assignment of labels is tiresome, so efforts should be made towards automatic cluster label assignment. In this study, the authors focus on facilitating comprehension of software clustering results by automatically assigning meaningful labels to clusters. To assign labels, they use term weighting schemes borrowed from the domains of information retrieval and text categorisation. Although some term weighting schemes have been used by researchers for software cluster labelling, these schemes and related issues need to be analysed to identify their strengths and weaknesses for this task. In this context, the authors analyse the behaviour of seven well-known term weighting schemes and perform experiments on five software systems to identify the software characteristics that affect the labelling behaviour of these schemes.
- Author(s): M. Wang ; V. Holub ; T. Parsons ; P. O'Sullivan ; J. Murphy
- Source: IET Software, Volume 6, Issue 4, p. 296 –306
- DOI: 10.1049/iet-sen.2011.0091
- Type: Article
Enterprise systems produce vast amounts of logging data. This critical and valuable information must be processed automatically for timely system analysis and recovery. In response to industry demands, a standard database containing known issues has been introduced: the symptom database. Each symptom consists of a rule pattern and corresponding solutions. Patterns used for symptom identification are encoded as XPath expressions and matched against a stream of events in a standardised WSGI-format common base event. Efficient matching of symptom patterns has been raised as an important industry requirement. The authors present real-time symptom identification in a stream of events. The implementation allows multiple autonomic computing components, such as self-monitoring sensors, to effectively match known patterns in large datasets at run time. Unlike current state-of-the-art approaches, the proposed solution allows users to define patterns using all the complex XPath functions in addition to standard numeric and Boolean operators. In particular, the work aims at efficient simultaneous matching of a large set of XPath-based symptom patterns against a high-volume event stream, which is crucial for symptom identification but was not addressed efficiently by currently available XPath-matching engines.
- Author(s): Y. Liu and A.S. Fong
- Source: IET Software, Volume 6, Issue 4, p. 307 –312
- DOI: 10.1049/iet-sen.2011.0144
- Type: Article
Dynamic compilation increases Java virtual machine (JVM) performance because running compiled code is faster than interpreting Java bytecodes. However, an inappropriate decision on dynamic compilation may degrade performance owing to compilation overhead. A good heuristic algorithm for dynamic compilation should strike an appropriate balance between compilation overhead and performance gain in each method invocation sequence. A method-size and execution-time heuristic algorithm is proposed in this study. Its key principle is that different method sizes necessitate different compile thresholds for optimal performance. A parameter-search mechanism using a genetic algorithm is proposed to find optimised multi-thresholds for dynamic compilation. The heuristic algorithm is evaluated in an OpenJDK Java Server JVM using the SPEC JVM98 benchmark suite. The algorithm shows an overall performance advantage on the tested benchmarks, gaining a speedup of 19.1% on average, and increases the performance of the original OpenJDK by 10.2% when extended to the whole benchmark suite.
- Author(s): D. Gavalas ; M. Kenteris ; C. Konstantopoulos ; G. Pantziou
- Source: IET Software, Volume 6, Issue 4, p. 313 –322
- DOI: 10.1049/iet-sen.2011.0156
- Type: Article
This study deals with the problem of deriving personalised recommendations for daily sightseeing itineraries for tourists visiting any destination. The authors' approach considers selected places of interest that a traveller would potentially wish to visit and derives a near-optimal itinerary for each day of the visit; the places of potential interest are selected on the basis of stated or implied user preferences. The method enables the planning of customised daily tourist itineraries, considering user preferences, the time available for visiting sights each day, the opening days of sights and the average visiting times for these sights. The authors propose a heuristic solution to this problem, addressed to both web and mobile web users. Evaluation and simulation results verify the competence of the approach against an alternative method.
- Author(s): S. Misra ; I. Akman ; R. Colomo-Palacios
- Source: IET Software, Volume 6, Issue 4, p. 323 –334
- DOI: 10.1049/iet-sen.2011.0206
- Type: Article
This study proposes a framework for the evaluation and validation of software complexity measures. The framework is designed to analyse, from different perspectives, whether or not a software metric qualifies as a measure. Unlike existing frameworks, it takes into account the practical usefulness of the measure and includes all the factors that are important for theoretical and empirical validation, including measurement theory. The applicability of the framework is tested using the cognitive functional size measure; the testing process shows that the proposed framework can be applied in the same manner to any software measure. A comparative study with other frameworks has also been performed. The results indicate that the present framework better represents most of the parameters required to evaluate and validate a new complexity measure.
- Author(s): H. Azath and R.S.D. Wahidabanu
- Source: IET Software, Volume 6, Issue 4, p. 335 –341
- DOI: 10.1049/iet-sen.2011.0146
- Type: Article
Software development effort estimation is important for quality management in the software development industry, yet its automation remains a challenging issue. Accurate estimation of software effort is critical in software engineering. Existing methods for software cost estimation use very few quality factors. To overcome this drawback, the authors propose an efficient effort estimation system based on quality assurance coverage. This study lays a basis for improving software effort estimation research through a series of quality attributes combined with the constructive cost model (COCOMO). The software system for which effort is to be estimated is classified according to COCOMO classes. For quality assurance, ISO 9126 quality factors are used, and for the weighting factors, the function point metric is used as the estimation approach. Effort is estimated for MS Word 2007 using the following models: the Albrecht and Gaffney model, the Kemerer model, the SMPEEM (Software Maintenance Project Effort Estimation Model) and the FP-based Matson, Barnett and Mellichamp model.
- Author(s): H. Yu ; Z.-H. Deng ; N. Gao
- Source: IET Software, Volume 6, Issue 4, p. 342 –349
- DOI: 10.1049/iet-sen.2011.0082
- Type: Article
The ability to compute top-k matches to eXtensible Markup Language (XML) queries is gaining importance owing to the growth of large XML repositories. Current work on top-k matching for XML queries mainly focuses on employing XPath, XQuery or NEXI as the query language, whereas little work has addressed top-k matching for XML keyword search. In this study, the authors propose a novel two-layer index construction and an associated algorithm for efficiently computing the top-k results of an XML keyword search. The core contribution, a two-layer inverted index and its associated algorithm, takes both the score-sorted sequence and the Dewey ID-sorted sequence into consideration, and thus gains performance benefits during query processing. The authors have conducted extensive experiments, and the results show efficiency advantages over existing approaches.
- Author(s): M. Abdellatief ; A.B.M. Sultan ; A.A. Abdul Ghani ; M.A. Jabar
- Source: IET Software, Volume 6, Issue 4, p. 350 –357
- DOI: 10.1049/iet-sen.2011.0122
- Type: Article
The motivation of this study is to bridge the gap between component providers and component users, especially in the area of component evaluation, using component information flow (CIF) measurement and multidimensional approaches for measurement interpretation. By measuring the design of component-based software systems (CBSS), software designers, testers and maintainers may be able to locate weaknesses in the system design and to estimate the effort required for testing as well as the cost of maintenance. This study proposes a CIF based on inter-component flow and intra-component flow. Moreover, a set of metrics based on the CIF was developed to characterise and evaluate the effect of component design size on the quality of CBSS design. The theoretical evaluation results indicate that the proposed metrics are valid size measures. An application that demonstrates the intuitiveness of the approach is also presented. The results show that multidimensional analysis of design size appears promising as a means of capturing the quality of the CBSS design in question.
- Author(s): M. Rizwan Jameel Qureshi
- Source: IET Software, Volume 6, Issue 4, p. 358 –363
- DOI: 10.1049/iet-sen.2011.0110
- Type: Article
Extreme programming (XP) is one of the most widely used agile methodologies for software development. It aims to improve software quality and responsiveness to changing customer requirements. Despite the fact that XP offers a number of benefits and has been widely adopted, it does not offer the same benefits for medium and large software projects. Some of the reasons for this are weak documentation, the lack of a strong architecture and a lack of risk awareness during software development. Owing to the ever-increasing demand for agile approaches, this study addresses the problem of XP's ability to handle medium and large projects. Most companies that employ XP as a development methodology for medium and large projects face this problem, which underlines its importance. To address it, the XP model is extended in this study so that it offers its benefits equally for medium- and large-scale projects. As an evaluation of the extended XP, three independent industrial case studies were conducted; they are described and their results presented in the study. The results provide evidence that the extended XP can be beneficial for medium and large software development projects.
- Author(s): J.S. Keränen and T.D. Räty
- Source: IET Software, Volume 6, Issue 4, p. 364 –376
- DOI: 10.1049/iet-sen.2011.0111
- Type: Article
The evolution of software testing technologies has significantly reduced test execution times, but test design and generation are still often implemented with slow, manual methods. Model-based testing (MBT) automates test design and generation, and different MBT solutions are familiar from research, but more effort is needed to adopt MBT for industrial use. Hardware in the loop (HIL) is a simulation and testing technique used in the development and testing of embedded systems. HIL is a challenging application field for MBT owing to the complex and non-deterministic nature of some embedded systems. To tackle this problem, the authors present a novel prototype platform in which online and offline MBT is applied to a HIL environment. MBT in general has been introduced for HIL in the scientific literature before, but the application of online MBT in HIL is a novel approach. The whole MBT-in-HIL prototype platform, along with the MBT tool used, the platform architecture and the MBT process, is presented, accompanied by experimental results and analysis of two case studies with an example embedded system under test.
- Author(s): L.K. Shar and H.B.K. Tan
- Source: IET Software, Volume 6, Issue 4, p. 377 –390
- DOI: 10.1049/iet-sen.2011.0084
- Type: Article
Cross-site scripting (XSS) vulnerability is mainly caused by the failure of web applications to sanitise user inputs embedded in web pages. Even though state-of-the-art defensive coding methods and vulnerability detection methods are often used by developers and security auditors, XSS flaws remain in many applications because of (i) the difficulty of adopting these methods, (ii) their inadequate implementation, and/or (iii) a lack of understanding of the XSS problem. To address this issue, this study proposes a code-auditing approach that recovers the defence model implemented in program source code and suggests guidelines for checking the adequacy of the recovered model against XSS attacks. On the basis of the possible implementation patterns of defensive coding methods, the approach extracts all such defences implemented for securing each potentially vulnerable HTML output. It then introduces a variant of the control flow graph, called the tainted-information flow graph, as a model for auditing the adequacy of XSS defence artefacts. The authors evaluated the proposed method through experiments on seven Java-based web applications. In the auditing experiments, the approach was effective in recovering all the XSS defence features implemented in the test subjects. The extracted artefacts were also shown to be useful for filtering the false-positive cases reported by a vulnerability detection method and helpful in fixing vulnerable code sections.
- Author(s): J. Guo ; Y. Wang ; Z. Zhang ; J. Nummenmaa ; N. Niu
- Source: IET Software, Volume 6, Issue 4, p. 391 –401
- DOI: 10.1049/iet-sen.2010.0072
- Type: Article
Existing product requirements form a rich source for domain requirements analysis in software product lines (SPLs). Most existing domain analysis techniques depend on domain experts' experience and manual operation to identify the commonalities and variabilities of product requirements. They often demand a high level of manual effort and a large up-front investment, which can present a prohibitive barrier to SPL adoption. This study proposes a model-driven approach to semi-automatically derive domain functional requirements (DFRs) from product functional requirements (PFRs). Based on the linguistic characterisation of a domain's action-oriented concerns, the authors apply Fillmore's semantic framework to functional requirements and define metamodels for PFRs and DFRs. The functional requirements of existing products are constructed as corresponding PFR models. Following the proposed merging and refinement rules, the authors' approach automates the transformation of PFR models into DFR models by merging the same or similar PFRs and analysing their commonality and variability. The resulting DFR models can serve as an initial basis for the SPL. The authors demonstrate their approach using an example of a home security system (HSS) SPL and give a preliminary evaluation. The approach provides rigorous model-based support for DFR development and complements existing domain analysis techniques with less time and effort.
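The merge-and-tag step described in this abstract can be sketched in a few lines. This is only a minimal illustration: the function name `merge_pfrs`, the use of a plain string-similarity threshold (in place of the paper's Fillmore-frame-based matching rules) and the requirement texts are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of merging product functional requirements (PFRs)
# into domain functional requirements (DFRs) tagged as common or variable.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude textual similarity standing in for semantic-frame matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def merge_pfrs(products, threshold=0.85):
    """products: one list of requirement strings per product in the SPL."""
    dfrs = []  # each DFR: {"text": ..., "products": set of product indices}
    for pid, reqs in enumerate(products):
        for req in reqs:
            for dfr in dfrs:
                if similarity(req, dfr["text"]) >= threshold:
                    dfr["products"].add(pid)  # same/similar PFR: merge
                    break
            else:
                dfrs.append({"text": req, "products": {pid}})
    # Commonality/variability analysis: a DFR present in every product
    # is a commonality; otherwise it is a variation point.
    for dfr in dfrs:
        dfr["kind"] = "common" if len(dfr["products"]) == len(products) else "variable"
    return dfrs
```

For example, on two hypothetical home-security products that share an "arm the alarm" requirement but differ in their alerting requirements, the shared requirement is merged into one common DFR and the two alerting requirements become variable DFRs.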
Enhancing comprehensibility of software clustering results
Symptom matching for event streams
Heuristic optimisation algorithm for Java dynamic compilation
Web application for recommending personalised mobile tourist routes
Framework for evaluation and validation of software complexity measures
Efficient effort estimation system viz. function points and quality assurance coverage
Efficient top-k algorithm for eXtensible Markup Language keyword search
Multidimensional size measure for design of component-based software system
Agile software development methodology for medium and large projects
Model-based testing of embedded systems in hardware in the loop environment
Auditing the XSS defence features implemented in web application programs
Model-driven approach to developing domain functional requirements in software product lines