IET Software
Volume 14, Issue 4, August 2020
Empirical investigation: performance and power-consumption based dual-level model for exascale computing systems
- Author(s): Muhammad Usman Ashraf ; Fathy Alboraei Eassa ; Aiiad Ahmad ; Abdullah Algarni
- Source: IET Software, Volume 14, Issue 4, p. 319 –327
- DOI: 10.1049/iet-sen.2018.5062
- Type: Article
Exascale computing systems (ECS) are anticipated to perform at Exaflop speed (10^18 operations per second) within a power budget of about 20 MW. This ultrascale performance requires a thousand-fold speedup over current Petascale systems. For future high-performance computing (HPC), power consumption is one of the vital challenges that rules out achieving Exaflops the traditional way, by increasing clock speed. One standard way to attain such performance is massive parallelism. At this early stage, it is hard to decide which parallel programming approach can provide the massive parallelism needed to attain ExaFlops. This article commences with a short description and algorithmic implementation of various hybrid parallel programming models (PPMs) for homogeneous and heterogeneous cluster systems. Furthermore, the authors evaluated the performance and power consumption of these hybrid models by implementing them in two HPC benchmark applications: square matrix multiplication and a Jacobi iterative solver for the two-dimensional Laplace equation. The results demonstrated that the heterogeneous hybrid model (MPI + X) outperformed the homogeneous parallel programming model (MPI + OpenMP). This empirical investigation of hybrid PPMs is a leading step for research and development communities in selecting a promising model for emerging ECS.
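One of the two benchmarks named above is easy to picture. The sketch below is a minimal serial Jacobi sweep for the two-dimensional Laplace equation in Python/NumPy; the grid size, boundary values, and tolerance are illustrative rather than taken from the paper. In the hybrid models, this stencil update is the part that is decomposed across MPI ranks and then threaded with OpenMP or offloaded to an accelerator ("X").

```python
import numpy as np

def jacobi_laplace_2d(n=256, max_iters=10_000, tol=1e-6):
    """Solve the 2-D Laplace equation on an n x n grid with fixed
    (Dirichlet) boundaries using Jacobi iteration."""
    u = np.zeros((n, n))
    u[0, :] = 100.0  # illustrative boundary condition: a hot top edge
    for it in range(max_iters):
        u_new = u.copy()
        # Each interior point becomes the average of its four neighbours.
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    return u, max_iters
```

An MPI version would assign each rank a strip of the grid and exchange one-row halos with its neighbours every iteration; that decomposition is where the homogeneous and heterogeneous hybrids differ.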
Software development effort estimation: a systematic mapping study
- Author(s): Carlos Eduardo Carbonera ; Kleinner Farias ; Vinicius Bischoff
- Source: IET Software, Volume 14, Issue 4, p. 328 –344
- DOI: 10.1049/iet-sen.2018.5334
- Type: Article
The field of software-development effort estimation explores ways of defining effort through prediction approaches. Even though this field has a crucial impact on budgeting and project planning in industry, the number of works classifying and examining currently available approaches is still small. This article therefore presents a comprehensive overview of these approaches and pinpoints research gaps, challenges and trends. A systematic mapping of the literature was designed and performed based on well-established practical guidelines. In total, 120 primary studies were selected, analysed and categorised to answer six research questions, after a careful filtering process applied to a sample of 3746 candidate studies. Over 70% of the selected studies adopted multiple effort estimation approaches; over 45% adopted evaluation research as a research method; over 90% of the participants were students rather than professionals; most studies had their quality assessed as high and were most commonly published in journals. Our study benefits practitioners and researchers by providing a body of knowledge about the current literature, serving as a starting point for upcoming studies. This article reports challenges worth investigating regarding the use of cognitive load and team interaction.
Systematic literature review on intent-driven systems
- Author(s): Johan Silvander ; Krzysztof Wnuk ; Mikael Svahnberg
- Source: IET Software, Volume 14, Issue 4, p. 345 –357
- DOI: 10.1049/iet-sen.2018.5338
- Type: Article
An intent-driven system is a compositional system of human actors and machine actors. The aim of intent-driven systems is to capture stakeholders' intents and transform these into a form that enables computer processing. Only then can different machine actors negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial collaboration. The aim of this study is to find existing methods/techniques which could be used as building blocks to construct intent-driven systems, and to provide insight into what is needed to enable intent-driven systems with the help of these methods/techniques. As part of a design science study, a systematic literature review was conducted. Methods/techniques which can be used as building blocks to construct intent-driven systems do exist in the literature; however, how these methods/techniques can interact to enable realisations of intent-driven systems is not evident. The synthesis shows a need for further research regarding the semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.
Checklist-based techniques with gamification and traditional approaches for inspection of interaction models
- Author(s): Adriana Lopes Damian ; Anna Beatriz Marques ; Williamson Silva ; Simone Diniz Junqueira Barbosa ; Tayana Conte
- Source: IET Software, Volume 14, Issue 4, p. 358 –368
- DOI: 10.1049/iet-sen.2019.0171
- Type: Article
Interaction models specify the structure and content of the user interface, the allowed user actions, and the corresponding system responses. There is a need to inspect interaction models, as this avoids the propagation of defects to other artefacts. We created two inspection techniques for interaction models, called MoLVERIC Cards (MCards) and MoLVERIC Check (MCheck). MCards employs a gamification mechanism to motivate practitioners during the inspection. MCheck is a simple technique to be used by practitioners in a traditional way. Both techniques consist of questions whose answers assist in identifying defects. We performed three studies to verify whether these techniques support the inspection of interaction models. In the first and second studies, we evaluated MCards and MCheck against an ad-hoc technique supported by the conventional inspection approach based on defect types. The results of these studies showed that both techniques support the inspection of interaction models. In the third study, we evaluated MCards against MCheck to understand the participants' perceptions of both techniques. The results showed that MCards was considered more suitable for practitioners interested in dynamic activities, while MCheck was considered more suitable for practitioners who prefer a more traditional technique.
Efficient improved ant colony optimisation algorithm for dynamic software rejuvenation in web services
- Author(s): Kimia Rezaei Kalantari ; Ali Ebrahimnejad ; Homayun Motameni
- Source: IET Software, Volume 14, Issue 4, p. 369 –376
- DOI: 10.1049/iet-sen.2019.0018
- Type: Article
Software rejuvenation is an effective technique to counteract software ageing in continuously running applications such as web-service-based systems. In a client-server application, where the server is intended to run perpetually, periodically rejuvenating the server process during its idle times increases the availability of the service. In these systems, web services are allocated based on the receiver's requirements and the server's facilities. Since selecting a server among candidates while maintaining optimal quality of service is an NP-hard problem, meta-heuristics seem suitable. In this study, dynamic software rejuvenation is proposed as a proactive fault-tolerance technique based on a combination of ant colony optimisation (ACO) and gravitational emulation local search (GELS), used to determine the optimal times at which rejuvenation should be performed so that the failure rate is minimised. The proposed method combines the global search capability of ACO with the local search of the GELS algorithm to create a stable algorithm that makes reaching the global optimum largely possible. The simulation results revealed that the proposed strategy can decrease the failure rate of web services by 28% on average in comparison with genetic-algorithm and decision-tree strategies.
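The abstract does not give enough detail to reproduce the authors' algorithm, but a toy ant-colony search over candidate rejuvenation intervals conveys the idea. In this sketch the cost model `failure_rate`, the candidate intervals, and all parameters are hypothetical, and the GELS local-search refinement is omitted.

```python
import random

def aco_rejuvenation_schedule(candidates, failure_rate, n_ants=20,
                              n_iters=50, evaporation=0.1):
    """Toy ant colony search over candidate rejuvenation intervals.
    `failure_rate(t)` is an application-specific cost model."""
    pheromone = {t: 1.0 for t in candidates}
    best_t, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Probabilistic choice proportional to pheromone strength.
            weights = [pheromone[t] for t in candidates]
            t = random.choices(list(candidates), weights=weights)[0]
            cost = failure_rate(t)
            if cost < best_cost:
                best_t, best_cost = t, cost
            pheromone[t] += 1.0 / (1.0 + cost)  # reinforce good intervals
        for t in pheromone:  # evaporate so old trails fade
            pheromone[t] *= 1.0 - evaporation
    return best_t, best_cost

# Hypothetical usage: intervals of 1-48 hours, cost lowest near 12 hours.
print(aco_rejuvenation_schedule(range(1, 49), lambda t: abs(t - 12) + 1))
```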
Detecting application logic vulnerabilities via finding incompatibility between application design and implementation
- Author(s): Mahmoud Ghorbanzadeh and Hamid Reza Shahriari
- Source: IET Software, Volume 14, Issue 4, p. 377 –388
- DOI: 10.1049/iet-sen.2019.0186
- Type: Article
Logic vulnerabilities are due to defects in the implementation of the application logic, such that the implemented logic is not the logic that was expected. The pattern of such vulnerabilities depends on the design and business logic of the application; there are no specific, common patterns for application logic vulnerabilities in commercial applications. In this study, a method named FINAD is introduced to detect application logic vulnerabilities using an activity flow graph (AFG) to find incompatibilities between an implemented application and its design. In this work, the AFG, which combines the activity diagram (AD) and the control flow graph (CFG), is presented for the first time. Investigation of common types of application logic vulnerabilities indicated that the majority could be detected through static analysis on an AFG. The FINAD method is language-independent and can be used for vulnerability detection in any programming language, provided that the AD is available and the CFG of the program can be created. An implementation of FINAD for PHP showed its effectiveness in detecting known logic vulnerabilities from the CVE vulnerability database.
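The construction of the AFG is beyond an abstract, but the core incompatibility check can be illustrated on a toy model in which both the AD and the CFG are reduced to sets of transitions between named activities. Everything below (the activity names, the edge sets, the function) is invented for illustration.

```python
def find_incompatibilities(ad_edges, cfg_edges):
    """Flag control-flow transitions the design never allows, and designed
    transitions the implementation never realises."""
    unauthorised = cfg_edges - ad_edges  # implemented but not designed
    unrealised = ad_edges - cfg_edges    # designed but not implemented
    return unauthorised, unrealised

ad = {("login", "view_cart"), ("view_cart", "checkout"), ("checkout", "pay")}
cfg = {("login", "view_cart"), ("view_cart", "pay")}  # payment skips checkout
print(find_incompatibilities(ad, cfg))
```

A transition such as ("view_cart", "pay") that exists in the code but not in the design is exactly the kind of logic flaw (here, bypassing the checkout step) that pattern-based scanners miss.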
Analytic hierarchy process based prioritisation and taxonomy of success factors for scaling agile methods in global software development
- Author(s): Mohammad Shameem ; Arif Ali Khan ; Md. Gulzarul Hasan ; Muhammad Azeem Akbar
- Source: IET Software, Volume 14, Issue 4, p. 389 –401
- DOI: 10.1049/iet-sen.2019.0196
- Type: Article
Global software development (GSD) organisations are currently adopting agile frameworks in order to develop software products efficiently. The main objective of this study is to identify the success factors (SFs) that could have a positive impact on scaling agile practices in a GSD environment, and to develop a taxonomy of these factors based on their prioritisation using the analytic hierarchy process (AHP). The study was conducted in four stages: (1) problem identification and definition of the study goal, (2) identification of the SFs and their categorisation, (3) validation of the SFs using a questionnaire survey, and (4) application of AHP to prioritise the SFs and develop the taxonomy of the SFs and their respective categories. The results indicated that 'technology' is the most significant category of SFs, and that rich technological infrastructure is the most important individual factor. Based on these findings, the authors conclude that the contribution of this study lies not only in the taxonomy of the SFs but also in their prioritisation using the AHP approach, which assists software organisations in scaling agile methods effectively in the GSD environment.
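For readers unfamiliar with AHP, the prioritisation step boils down to computing the principal eigenvector of a pairwise comparison matrix and checking its consistency. The sketch below uses an invented 3x3 matrix of category judgements on Saaty's 1-9 scale; the paper's actual matrices and categories are not given in the abstract.

```python
import numpy as np

# Hypothetical pairwise comparisons of three SF categories.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.58
print(weights, cr)  # judgements are acceptably consistent if CR < 0.10
```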
New internal metric for software clustering algorithms validity
- Author(s): Masoud Kargar ; Ayaz Isazadeh ; Habib Izadkhah
- Source: IET Software, Volume 14, Issue 4, p. 402 –410
- DOI: 10.1049/iet-sen.2019.0138
- Type: Article
Clustering (modularisation) techniques are often employed for the meaningful decomposition of a program with the aim of understanding it. In the software clustering context, several external metrics have been presented to evaluate and validate the clustering produced by an algorithm. These metrics compare the resulting clustering against a ground-truth decomposition; when no ground-truth decomposition exists for a software system, internal metrics are used instead. Because of the comparison with a reference decomposition, external metrics are preferred to internal ones. However, the available internal metrics are not appropriate for this evaluation, because they do not consider the purpose of software clustering, which is to understand a software system. In this study, the authors present six criteria that influence the understanding of a program, and design an internal metric for estimating software clustering quality based on those criteria. They selected ten folders of Mozilla Firefox with different sizes and functionalities to assess the reliability of the proposed metric. The experimental results confirm that the proposed internal metric is more accurate than the existing internal metrics in terms of proximity to expert decomposition, and can therefore substitute for external metrics.
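The six criteria themselves are not listed in the abstract. For context, existing internal metrics in this space typically score intra-cluster cohesion against inter-cluster coupling; the sketch below is a simplified MQ-style measure of that family, not the authors' proposed metric.

```python
def cluster_quality(clusters, edges):
    """Average cohesion-vs-coupling score over clusters: for each cluster,
    2*intra / (2*intra + inter), in the spirit of classic MQ measures."""
    score = 0.0
    for c in clusters:
        intra = sum(1 for u, v in edges if u in c and v in c)
        inter = sum(1 for u, v in edges if (u in c) != (v in c))
        if intra:
            score += (2 * intra) / (2 * intra + inter)
    return score / len(clusters)

clusters = [{"a", "b", "c"}, {"d", "e"}]                  # candidate decomposition
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]  # module dependencies
print(cluster_quality(clusters, edges))
```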
SmartVisual: a visualisation tool for SmartThings IoT Apps using static analysis
- Author(s): Na-Yeon Bak ; Byeong-Mo Chang ; Kwanghoon Choi
- Source: IET Software, Volume 14, Issue 4, p. 411 –422
- DOI: 10.1049/iet-sen.2019.0344
- Type: Article
SmartThings is one of the most widely used smart home platforms for the internet of things (IoT). SmartApps are IoT applications on the SmartThings platform that enable the automation of home devices. SmartApps are event-driven: inputs are received from device events, and outputs are issued to control devices. Understanding the behaviour of IoT applications is a challenge because the inputs and outputs are rarely visible. To tackle this challenge, the proposed approach visualises IoT applications as a set of IoT services. The authors propose an event-flow-based visualisation method in which a flow from an event to an action is viewed as an IoT service. They implement a tool called SmartVisual that performs a static analysis of SmartApps to generate a diagram of event flows. The tool also provides a tree model of the static structure of SmartApps, along with software metrics relevant to their event-driven nature. The tool was applied to 64 SmartApp samples provided by SmartThings. On average, each SmartApp had four event flows (the most complex had 58), two inputs, and two outputs, and the average length of the event flows was 1.43 methods.
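SmartVisual analyses Groovy SmartApps statically; the flavour of event-flow extraction can be shown on a hypothetical toy model in which the subscriptions and a call map have already been extracted from the source. All names and structures below are invented for illustration.

```python
def event_flows(subscriptions, handler_calls):
    """Follow each subscribed event's handler through the call map down to
    terminal device commands, yielding one event -> actions flow per service."""
    flows = []
    for (device, event), handler in subscriptions.items():
        stack, seen, actions = [handler], {handler}, []
        while stack:
            fn = stack.pop()
            for callee in handler_calls.get(fn, []):
                if callee.endswith(")"):  # a device command = an action
                    actions.append(callee)
                elif callee not in seen:
                    seen.add(callee)
                    stack.append(callee)
        flows.append(((device, event), actions))
    return flows

subscriptions = {("motionSensor", "motion.active"): "motionHandler"}
handler_calls = {"motionHandler": ["turnOnLights"],
                 "turnOnLights": ["light.on()"]}
print(event_flows(subscriptions, handler_calls))
```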
Overlap-aware rapid type analysis for constructing one-to-one matched call graphs in regression test selection
- Author(s): Mingwan Kim ; Jongwook Jeong ; Neunghoe Kim ; Hoh Peter In
- Source: IET Software, Volume 14, Issue 4, p. 423 –432
- DOI: 10.1049/iet-sen.2018.5442
- Type: Article
Regression testing is an important but costly activity for verifying a program after code changes. Regression test selection (RTS) aims to reduce this cost by selecting only the test cases affected by the changes. Among the several ways of selecting such affected test cases, call graphs have been statically constructed to select test cases at method-level granularity. However, RTS techniques reduce the cost of regression testing less than expected unless the call graphs are efficiently matched one-to-one with the test cases. In this study, the authors propose overlap-aware rapid type analysis (ORTA), designed to minimise the redundant cost of creating the matched call graphs with rapid type analysis (RTA). One-to-one matching and ORTA were evaluated on 1487 commits selected from 30 Java projects. RTA-based RTS with one-to-one matching selected 46.90% fewer test cases at the cost of a 2.76% longer end-to-end regression-testing time than without it. This time increase was reduced by 22.58% when ORTA was substituted for RTA. ORTA achieved this cost reduction while removing 82.77% of the duplicate edges that RTA created on 993 commits.
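ORTA itself is not specified in the abstract, but the setting it optimises, method-level call-graph-based test selection, is easy to sketch: a test is selected if its call graph reaches any changed method. The graph, test names, and changed set below are invented.

```python
from collections import deque

def select_tests(call_graph, tests, changed_methods):
    """Select every test whose statically constructed call graph reaches a
    changed method (method-level RTS); a sketch, not ORTA itself."""
    selected = []
    for test in tests:
        seen, queue = {test}, deque([test])
        while queue:
            m = queue.popleft()
            if m in changed_methods:
                selected.append(test)
                break
            for callee in call_graph.get(m, ()):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
    return selected

cg = {"testA": ["foo"], "testB": ["bar"], "foo": ["baz"]}
print(select_tests(cg, ["testA", "testB"], {"baz"}))  # -> ['testA']
```

Per-test graphs built this way overlap heavily (foo and baz would be re-traversed for every test that reaches them); removing that duplicated edge construction is, per the abstract, where ORTA saves time over plain RTA.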
Determining the best-fit programmers using Bayes' theorem and artificial neural network
- Author(s): Sorada Prathan and Siew Hock Ow
- Source: IET Software, Volume 14, Issue 4, p. 433 –442
- DOI: 10.1049/iet-sen.2018.5440
- Type: Article
A data-mining-based technique is proposed for selecting and employing the best-fit programmers to meet the needs of software companies. The proposed technique incorporates Bayes' theorem and an artificial neural network (ANN). The datasets used were from two software companies (Company 1 and Company 2) in India, covering the years 2010–2015. Bayes' theorem is used to identify the prognostic attributes of best-fit programmers, while the ANN classifier is used to predict them. Evaluated with a confusion matrix on the test datasets of Company 1 and Company 2 respectively, the ANN classifier achieved accuracy of 97.2 and 87.3%, precision of 95.8 and 54.5%, and recall of 100 and 75%. The results show that the technique is effective for predicting best-fit programmers. Software companies can use it in their recruitment and selection process to determine the best-fit employees for programmer posts. The proposed technique can also be adapted to other disciplines, such as sports and education, to identify the most suitable person for a given position.
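As a sketch of the two-stage idea, the fragment below applies Bayes' theorem to score how strongly each attribute predicts the best-fit class, then trains a small ANN on the retained attributes. The toy data, the median split, and the network size are all invented; the abstract does not give the real attributes or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data: rows = programmers, columns = candidate attributes
# (e.g. test score, years of experience, peer rating); y = 1 if best-fit.
X = np.array([[0.9, 5, 0.8], [0.4, 1, 0.3], [0.8, 4, 0.9],
              [0.3, 2, 0.2], [0.7, 6, 0.7], [0.2, 1, 0.4]])
y = np.array([1, 0, 1, 0, 1, 0])

def p_fit_given_high(col):
    """Bayes' theorem: P(fit | attribute above its median)."""
    high = X[:, col] > np.median(X[:, col])
    return high[y == 1].mean() * y.mean() / high.mean()

# Keep attributes whose posterior beats the base rate (prognostic attributes).
keep = [c for c in range(X.shape[1]) if p_fit_given_high(c) > y.mean()]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:, keep], y)
print(keep, clf.predict(X[:, keep]))
```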
Software defect prediction via LSTM
- Author(s): Jiehan Deng ; Lu Lu ; Shaojian Qiu
- Source: IET Software, Volume 14, Issue 4, p. 443 –450
- DOI: 10.1049/iet-sen.2019.0149
- Type: Article
Software quality plays an important role in the software lifecycle. Traditional software defect prediction approaches mainly focus on using hand-crafted features to detect defects. However, like human languages, programming languages contain rich semantic and structural information, and the cause of defective code is closely related to its context; failing to capture this information, the performance of traditional approaches is far from satisfactory. In this study, the authors leverage a long short-term memory (LSTM) network to automatically learn semantic and contextual features from source code. Specifically, they first extract each program's abstract syntax tree (AST), which is made up of AST nodes, and evaluate what and how much information can be preserved for several node types. They then traverse the AST of each file and feed the resulting node sequence into the LSTM network to automatically learn the semantic and contextual features of the program, which are then used to determine whether the file is defective. Experimental results on several open-source projects showed that the proposed LSTM method is superior to the state-of-the-art methods.
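As an illustration of the final stage of such a pipeline, a minimal PyTorch model of this kind could look as follows; the embedding size, hidden size, and vocabulary are invented, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class ASTDefectLSTM(nn.Module):
    """Embed AST node-type tokens, run an LSTM over the traversal sequence,
    and classify the file as defective from the final hidden state."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, node_ids):                  # (batch, seq_len) int ids
        _, (h_n, _) = self.lstm(self.embed(node_ids))
        return torch.sigmoid(self.head(h_n[-1]))  # defect probability

# Toy usage: 2 files, each a traversal of 5 node-type ids from a 50-type vocab.
model = ASTDefectLSTM(vocab_size=50)
print(model(torch.randint(0, 50, (2, 5))))
```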