CAAI Transactions on Intelligence Technology
Volume 5, Issue 1, March 2020
- Author(s): Chunbiao Zhu ; Wei Yan ; Xing Cai ; Shan Liu ; Thomas H. Li ; Ge Li
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 1 –8
- DOI: 10.1049/trit.2019.0034
- Type: Article
The artistic style transfer of images aims to synthesise novel images by combining the content of one image with the style of another; it is a long-standing research topic that has already been widely applied in the real world. However, defining aesthetic perception from the human visual system is a challenging problem. In this study, the authors propose a novel method for automatic visual-perception style transfer. First, they present a novel saliency detection algorithm to automatically perceive the visual attention of an image. Then, unlike conventional style transfer algorithms in which style transfer is applied uniformly across all image regions, the authors use the saliency algorithm to guide the style transfer process, enabling different types of style transfer in different regions. Extensive experiments show that both the proposed saliency detection algorithm and the style transfer algorithm are superior in performance and efficiency.
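As a toy illustration of the saliency-guided idea above, the snippet below blends two differently stylised renderings of one image under a saliency map, so salient regions receive one style and the rest another. The function name and the linear blend are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def saliency_guided_blend(stylised_fg, stylised_bg, saliency):
    """Blend two differently stylised renderings of one image.

    saliency is an H x W map in [0, 1]; salient pixels take the
    foreground style, non-salient pixels the background style.
    """
    s = saliency[..., None]              # add a channel axis for broadcasting
    return s * stylised_fg + (1.0 - s) * stylised_bg
```

A soft saliency map (values between 0 and 1) gives smooth transitions between the two styles instead of hard seams.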
- Author(s): Rasim M. Alguliyev ; Ramiz M. Aliguliyev ; Lyudmila V. Sukhostat
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 9 –14
- DOI: 10.1049/trit.2019.0048
- Type: Article
Big data analysis requires large computing power, which is not always feasible, so new clustering algorithms capable of processing such data have become necessary. This study proposes a new parallel clustering algorithm based on the k-means algorithm, which significantly reduces the exponential growth of computations. The proposed algorithm splits a dataset into batches while preserving the characteristics of the initial dataset and increasing the clustering speed. The idea is to compute cluster centroids for each batch and then cluster those centroids in turn; each data point is assigned to the cluster with the nearest resulting centroid. Experiments on real large datasets are conducted to evaluate the effectiveness of the proposed approach, which is compared with k-means and its modification. The experiments show that the proposed algorithm is a promising tool for clustering large datasets in comparison with the k-means algorithm.
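The two-level batching idea can be sketched as follows: run k-means per batch, cluster the per-batch centroids, and assign each point to the nearest resulting centroid. This is a minimal NumPy sketch of the scheme described in the abstract, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means, returning the final centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                C[j] = members.mean(axis=0)
    return C

def batched_kmeans(X, k, n_batches=4):
    """Two-level clustering: centroids per batch, then cluster the centroids."""
    batch_centroids = np.vstack(
        [kmeans(b, k) for b in np.array_split(X, n_batches)])
    final = kmeans(batch_centroids, k)
    labels = np.argmin(((X[:, None] - final[None]) ** 2).sum(-1), axis=1)
    return final, labels
```

Because each batch is clustered independently, the per-batch step parallelises naturally; only the small set of batch centroids is clustered centrally.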
- Author(s): Romano Fantacci and Benedetta Picano
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 15 –21
- DOI: 10.1049/trit.2019.0049
- Type: Article
The continuous growth of smart devices needing processing has moved storage and computation from the cloud to the network edges, giving rise to the edge computing paradigm. Owing to the limited capacity of edge computing nodes, the presence of popular applications in the edge nodes results in significant improvements in user satisfaction and service accomplishment. However, the high variability of content requests makes demand prediction non-trivial, and most classical prediction approaches require gathering personal user information at a central unit, raising many user-privacy issues. In this context, federated learning has gained attention as a way to perform learning procedures on data disseminated across multiple users while keeping sensitive data protected. This study applies federated learning to the demand prediction problem in order to accurately forecast the more popular application types in the network. The proposed framework reaches high accuracy on the predicted application demand by aggregating, into a global weighted model, the feedback received from users after their local training. The validity of the proposed approach is verified through experiments on virtual machine replica copies and a comparison with an alternative forecasting approach based on chaos theory and deep learning.
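The weighted global aggregation described above can be sketched with a FedAvg-style round for a simple linear model: each client trains locally, and the server averages the resulting weights by client sample count. All names, the local optimiser, and the model choice are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np

def federated_round(global_w, client_data, lr=0.1, epochs=5):
    """One FedAvg-style round for a linear model y ~ X @ w.

    Each client starts from the global weights, runs a few local
    gradient steps on its own data, and the server aggregates the
    results weighted by each client's sample count.
    """
    updates, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(X)   # MSE gradient
            w -= lr * grad
        updates.append(w)
        sizes.append(len(X))
    # weighted average: larger clients contribute more to the global model
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
```

Raw data never leaves the clients; only model weights are shared, which is the privacy property the abstract highlights.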
- Author(s): Hema Shekar Basavegowda and Guesh Dagnew
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 22 –33
- DOI: 10.1049/trit.2019.0028
- Type: Article
Analysis of microarray data is a highly challenging problem due to the inherent complexity of the data: high dimensionality, small sample size, imbalanced classes, noisy structure, and high variance of feature values. These issues lead to lower classification accuracy and over-fitting. In this work, the authors develop a deep feedforward method to classify given microarray cancer data into a set of classes for subsequent diagnosis. They use a 7-layer deep neural network architecture with parameters tuned for each dataset. The small-sample-size and dimensionality problems are addressed with a well-known dimensionality reduction technique, principal component analysis. The feature values are scaled using the min–max approach, and the proposed approach is validated on eight standard microarray cancer datasets. Binary cross-entropy is used to measure the loss, and adaptive moment estimation is used for optimisation. The performance of the proposed approach is evaluated using classification accuracy, precision, recall, f-measure, log-loss, the receiver operating characteristic curve, and the confusion matrix. A comparative analysis shows that the proposed approach performs better than many state-of-the-art methods.
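The preprocessing-plus-classification pipeline above (min–max scaling, PCA, a classifier trained with binary cross-entropy) can be sketched in a few lines; here a logistic regression stands in for the paper's 7-layer network, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def minmax_scale(X):
    """Scale every feature to [0, 1] (constant features left at 0)."""
    lo, hi = X.min(0), X.max(0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def pca(X, n_components):
    """Project onto the top principal components via SVD of centred data."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def train_logreg(X, y, lr=0.5, epochs=500):
    """Binary cross-entropy with plain gradient descent
    (a stand-in for the paper's 7-layer network with Adam)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)         # BCE gradient w.r.t. weights
        b -= lr * (p - y).mean()
    return w, b
```

PCA before the classifier is what tames the dimensionality/sample-size mismatch the abstract describes: thousands of genes collapse to a handful of components.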
- Author(s): Razieh Hosseini and Alireza Rezvanian
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 34 –41
- DOI: 10.1049/trit.2019.0040
- Type: Article
In social network analysis, community detection is one of the most significant tasks for studying the structure and characteristics of networks. In recent years, several intelligent and meta-heuristic algorithms have been presented for community detection in complex social networks, among which the label propagation algorithm (LPA) is one of the fastest for discovering community structures. However, due to the randomness of the LPA, its performance is not suitable for general-purpose network analysis. In this study, the authors propose an improved version of label propagation, called AntLP, using similarity indices and ant colony optimisation (ACO). AntLP consists of two steps: in the first step, the algorithm assigns weights to the edges of the input network using several similarity indices; in the second step, AntLP uses ACO to propagate labels and optimise the modularity measure by grouping similar vertices in each community based on the local similarities among the vertices of the network. To study the performance of AntLP, several experiments are conducted on well-known social network datasets. Experimental simulations demonstrate that AntLP outperforms several community detection algorithms for social networks in terms of modularity, normalised mutual information, and running time.
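The label propagation baseline that AntLP improves on can be sketched as follows. This is the classic LPA (unweighted edges, random tie-breaking), without the similarity-index edge weighting or the ACO step that the paper adds; its randomness is exactly the weakness the abstract points out.

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Basic asynchronous LPA: each node repeatedly adopts the most
    frequent label among its neighbours, ties broken at random.
    adj maps each node to the list of its neighbours.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}      # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)            # asynchronous, random visiting order
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:            # isolated node keeps its label
                continue
            top = max(counts.values())
            new = rng.choice([l for l, c in counts.items() if c == top])
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:               # converged: no label moved this pass
            break
    return labels
```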
- Author(s): Juš Kocijan ; Matija Perne ; Boštjan Grašic ; Marija Zlata Božnar ; Primož Mlakar
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 42 –48
- DOI: 10.1049/trit.2019.0054
- Type: Article
This study describes an application of hybrid modelling for an atmospheric variable in the Krško basin. The hybrid model is a combination of a physics-based and a data-driven model and has some properties of both modelling approaches. In the authors' case, it is used to model an atmospheric variable, namely relative humidity at a particular location, so that the model's predictions can serve as input to an air-pollution-dispersion model for radiation exposure. The presented hybrid model combines a physics-based atmospheric model with a Gaussian-process (GP) regression model. The GP model is a probabilistic kernel method that also enables evaluation of prediction confidence. The poor scalability of GP modelling was addressed using sparse GP modelling, in particular the fully independent training conditional method. Two different approaches to dataset selection for empirical model training were used, and multiple-step-ahead predictions for different horizons were assessed. The study shows that the accuracy of the predicted relative humidity in the Krško basin improves when using hybrid models rather than the physics-based model alone, and that predictions over a considerable horizon length remain usable.
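For reference, the exact GP regression predictions that sparse methods such as the fully independent training conditional approximation make scalable can be written in a few lines. This sketch shows the standard posterior mean and variance with an assumed RBF kernel and fixed hyperparameters; it is not the authors' sparse model.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(Xtr, ytr, Xte, noise=1e-2, ell=1.0):
    """Exact GP regression: posterior mean and variance at test inputs."""
    K = rbf(Xtr, Xtr, ell) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ell)
    L = np.linalg.cholesky(K)                    # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks @ alpha                            # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(Xte, Xte, ell)) - (v ** 2).sum(0)  # posterior variance
    return mean, var
```

The variance output is the prediction-confidence measure the abstract mentions: it is small near the training data and reverts to the prior far from it. The Cholesky solve is the O(n³) cost that sparse approximations reduce.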
- Author(s): Weiyong Eng ; Voonchet Koo ; Tiensze Lim
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 49 –54
- DOI: 10.1049/trit.2019.0052
- Type: Article
Large displacement optical flow algorithms are generally categorised into descriptor-based matching and pixel-based matching. Descriptor-based approaches are robust to geometric variation; however, they have an inherent limitation in localisation precision due to their histogram nature. This work presents a novel method called improved precision dense descriptor flow (IPDDF). The authors introduce an additional pixel-based matching cost within an existing dense Daisy descriptor framework to improve flow estimation precision. Pixel-based features, such as pixel colour and gradient, are computed on top of the original descriptor in the authors' matching cost formulation. The pixel-based cost requires only a light-weight pre-computation and can be adapted seamlessly into the matching cost formulation. The framework is built on the Daisy Filter Flow work, adopting the Daisy descriptor, a filter-based efficient flow inference technique, and a randomised fast patch-match search algorithm. Given the novel matching cost formulation, the framework efficiently solves dense correspondence field estimation in a high-dimensional search space that includes scale and orientation. Experiments on various challenging image pairs demonstrate that the proposed algorithm enhances flow estimation accuracy and efficiently generates a spatially coherent yet edge-aware flow field.
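The shape of the combined matching cost described above can be sketched per candidate correspondence: a descriptor distance plus weighted pixel-level colour and gradient terms. The weights and the specific distance choices here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def matching_cost(desc_a, desc_b, colour_a, colour_b, grad_a, grad_b,
                  w_colour=0.3, w_grad=0.2):
    """Combined cost for one candidate match: descriptor distance plus
    pixel-level colour and gradient terms (weights are illustrative).
    The pixel terms penalise the small localisation errors that a
    histogram-based descriptor alone cannot resolve."""
    c_desc = np.linalg.norm(desc_a - desc_b)      # descriptor distance
    c_col = np.abs(colour_a - colour_b).mean()    # pixel colour difference
    c_grad = np.abs(grad_a - grad_b).mean()       # image gradient difference
    return c_desc + w_colour * c_col + w_grad * c_grad
```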
- Author(s): Subhankar Ghosh ; Palaiahnakote Shivakumara ; Prasun Roy ; Umapada Pal ; Tong Lu
- Source: CAAI Transactions on Intelligence Technology, Volume 5, Issue 1, p. 55 –65
- DOI: 10.1049/trit.2019.0051
- Type: Article
Graphology-based handwriting analysis to identify human behaviour, irrespective of application, is interesting. Unlike existing methods that use characters, words and sentences for behavioural analysis with human intervention, the authors propose an automatic method that analyses a few handwritten English lowercase characters from a to z to identify a person's behaviour. The proposed method extracts structural features, such as loops, slants, cursive strokes, straight lines, stroke thickness, contour shapes, aspect ratio and other geometrical properties, from different zones of isolated character images to derive a hypothesis based on a dictionary of graphological rules. The derived hypothesis can categorise the personal, positive, and negative social aspects of an individual. To evaluate the proposed method, an automatic system is developed which accepts characters from a to z written by different individuals across genders and age groups. This privacy-protected automatic system is available at http://subha.pythonanywhere.com. For quantitative evaluation, several people are asked to use the system and to check its automatic assessment of their characteristics, based on their handwriting, by choosing agree or disagree options. The system received 5300 responses from users, on which the proposed method achieves 86.70% accuracy.
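Two of the structural cues listed above, aspect ratio and stroke thickness, can be extracted from a binary character image in a few lines of NumPy. This is an illustrative sketch of the kind of feature the method computes, not the authors' system.

```python
import numpy as np

def character_features(img):
    """Simple structural features from a binary character image (1 = ink):
    the bounding-box aspect ratio and the mean per-row stroke width."""
    ys, xs = np.nonzero(img)
    h = ys.max() - ys.min() + 1              # bounding-box height
    w = xs.max() - xs.min() + 1              # bounding-box width
    # mean ink count per row that contains ink, a crude stroke-width proxy
    thickness = np.array([row.sum() for row in img if row.any()]).mean()
    return {"aspect_ratio": w / h, "stroke_thickness": thickness}
```

In a full pipeline, features like these feed the dictionary of graphological rules that maps measurements to behavioural categories.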
Neural saliency algorithm guide bi-directional visual perception style transfer
Efficient algorithm for big data clustering on single machine
Federated learning framework for mobile edge computing networks
Deep learning approach for microarray cancer data classification
AntLP: ant-based label propagation algorithm for community detection in social networks
Sparse and hybrid modelling of relative humidity: the Krško basin case study
IPDDF: an improved precision dense descriptor based flow estimation
Graphology based handwritten character analysis for human behaviour identification