IET Image Processing
Volume 13, Issue 2, 07 February 2019
- Source: IET Image Processing, Volume 13, Issue 2, p. 233 –234
- DOI: 10.1049/iet-ipr.2018.6533
- Type: Article
- Author(s): Chunmei Qing ; Jiawei Ruan ; Xiangmin Xu ; Jinchang Ren ; Jaime Zabalza
- Source: IET Image Processing, Volume 13, Issue 2, p. 235 –245
- DOI: 10.1049/iet-ipr.2018.5727
- Type: Article
For the spatial-spectral classification of hyperspectral images (HSIs), a deep learning framework is proposed in this study, which consists of convolutional neural networks (CNNs) and Markov random fields (MRFs). First, a CNN model is built to learn deep spectral features from the HSI and estimate the class posterior probability distribution; a dropout layer relieves overfitting in classification. Since the CNN is used as a pixel-wise classifier, it operates only in the spectral domain. The spatial information is then encoded by an MRF-based multilevel logistic prior that regularises the classification. To exploit the correlation between spectral and spatial features and improve performance, the marginal probability distribution of the HSI is learned using MRF-based loopy belief propagation. In comparison with several state-of-the-art classification approaches on three publicly available HSI datasets, experimental results demonstrate the superior performance of the proposed methodology.
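The authors regularise the CNN posteriors with an MRF prior via loopy belief propagation; as a simpler stand-in for that idea, the sketch below smooths per-pixel class posteriors with a multilevel logistic (Potts-style) prior using iterated conditional modes. The function name and the `beta` smoothing weight are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mrf_regularise(post, beta=1.0, n_iter=5):
    """Smooth per-pixel class posteriors (H, W, C) with a multilevel
    logistic (Potts) prior using iterated conditional modes (ICM)."""
    H, W, C = post.shape
    labels = post.argmax(axis=2)          # initial pixel-wise CNN decision
    log_post = np.log(post + 1e-12)
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # spectral evidence plus a bonus for agreeing neighbours
                energy = log_post[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        energy[labels[ni, nj]] += beta
                labels[i, j] = energy.argmax()
    return labels
```

A lone noisy pixel surrounded by confident neighbours is flipped to the majority class, which is the qualitative effect the MRF prior has on the CNN's spectral-only decisions.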
- Author(s): Pourya Shamsolmoali ; Masoumeh Zareapoor ; Jie Yang
- Source: IET Image Processing, Volume 13, Issue 2, p. 246 –253
- DOI: 10.1049/iet-ipr.2017.1375
- Type: Article
Classification is a principal technique for hyperspectral images (HSIs), where a label is assigned to each pixel based on its characteristics. However, owing to the scarcity of labelled training instances in HSIs and their ultra-high dimensionality, deep learning approaches need special consideration for HSI classification. As one of the first works of its kind in HSI classification, this study proposes a novel network pipeline called convolutional neural network in network, which is deeper than existing approaches, that jointly utilises spatial and spectral information and produces high-level features from the original HSI. The initial component of the pipeline exploits the spatial–spectral relationships of each individual pixel vector; the extracted features are then combined into a joint spatial–spectral feature map. Finally, a recurrent neural network is trained on the extracted features, which carry rich spectral and spatial properties of the HSI, to predict the label of each vector. The model has been tested on two large-scale hyperspectral datasets in terms of classification accuracy, training error, and computational time.
- Author(s): Sixiu Hu ; Chunhua Xu ; Jiangtao Peng ; Yan Xu ; Long Tian
- Source: IET Image Processing, Volume 13, Issue 2, p. 254 –260
- DOI: 10.1049/iet-ipr.2018.0124
- Type: Article
Kernel joint sparse representation (KJSR) performs joint sparse representation in the feature space and has shown good performance for hyperspectral image (HSI) classification. To distinguish spatial neighbouring pixels in the feature space, two weighted KJSR (WKJSR) methods are proposed in this study. The first computes the weight directly from the kernel similarity between neighbouring pixels. The second uses a nearest regularisation strategy to simultaneously optimise the weights of projected neighbouring pixels and the joint sparse representation coefficients. The proposed WKJSR methods exploit the similarities and differences among neighbouring pixels to obtain accurate weights for joint sparse representation and classification. Experimental results on two benchmark HSI datasets demonstrate the effectiveness of the proposed methods.
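The first weighting scheme computes weights from the kernel similarity between the centre pixel and its neighbours. A minimal sketch of that idea with an RBF kernel (the kernel choice and `gamma` are assumptions for illustration; the paper does not fix them here):

```python
import numpy as np

def kernel_weights(centre, neighbours, gamma=1.0):
    """Weight each neighbouring pixel by its RBF-kernel similarity to
    the centre pixel, then normalise the weights to sum to one."""
    d2 = ((neighbours - centre) ** 2).sum(axis=1)   # squared spectral distances
    w = np.exp(-gamma * d2)                         # kernel similarity values
    return w / w.sum()
```

A neighbour spectrally identical to the centre receives the largest weight, so dissimilar neighbours contribute less to the joint sparse representation.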
- Author(s): Nan Huang and Liang Xiao
- Source: IET Image Processing, Volume 13, Issue 2, p. 261 –269
- DOI: 10.1049/iet-ipr.2018.5421
- Type: Article
Clustering for hyperspectral images (HSIs) is a very challenging task because HSIs usually have large spectral variability, high dimensionality, and complex structures. The main aim of this study is to develop an improved sparse subspace clustering (SSC) method for HSIs. As an extension of spectral clustering, the SSC algorithm has achieved great success; however, the direct self-representation dictionary built from raw samples has poor representation power, and widely used dictionary learning (DL) methods such as K-singular value decomposition (K-SVD) suffer from high computational complexity. In this study, the authors propose a novel HSI clustering method based on sparse DL and anchored regression. The proposed method comprises three stages: (i) sparse DL; (ii) anchored subspace construction and regression; and (iii) representation-based spectral clustering. Specifically, a fast sparse DL method under a double-sparsity-constrained optimisation model is adopted to capture the intrinsic structure of HSIs. To establish a compact subspace for collaborative representation, an anchored subspace construction method based on atom clustering and grouping is presented. Owing to the anchored subspace, the representation coefficients can be computed quickly with a predefined projection matrix. Experimental results demonstrate that the proposed method achieves the best performance for HSI clustering.
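The speed-up in stage (ii) comes from precomputing a projection matrix for each anchored sub-dictionary, so per-pixel coefficients reduce to a matrix-vector product. A sketch of that precomputation, assuming a ridge-style (collaborative) representation, which is a common choice but not stated to be the authors' exact formulation:

```python
import numpy as np

def anchored_projection(D, lam=0.1):
    """Precompute the regression projection for an anchored
    sub-dictionary D (d x k): P = (D^T D + lam*I)^-1 D^T.
    Coefficients for any pixel x are then simply P @ x."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T)
```

Because `P` depends only on the dictionary, it is computed once per anchor and reused for every pixel assigned to that subspace.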
- Author(s): Maryam Imani and Hassan Ghassemian
- Source: IET Image Processing, Volume 13, Issue 2, p. 270 –279
- DOI: 10.1049/iet-ipr.2017.1431
- Type: Article
Incorporating spatial information alongside the rich spectral information of hyperspectral images significantly enhances classification accuracy. A morphology-based feature extraction and classification framework is proposed here, which uses the local neighbourhood information in a spatial window to extend the training set. The proposed method, morphology-based structure-preserving projection (MSPP), aims to preserve the data structure in the spectral–spatial feature space. Moreover, MSPP increases class discrimination by defining a similarity matrix constructed from the extended spectral–spatial training samples. The experimental results show the superiority of MSPP over several state-of-the-art classification methods in terms of classification accuracy.
- Author(s): Aizhu Zhang ; Ping Ma ; Sihan Liu ; Genyun Sun ; Hui Huang ; Jaime Zabalza ; Zhenjie Wang ; Chengyan Lin
- Source: IET Image Processing, Volume 13, Issue 2, p. 280 –286
- DOI: 10.1049/iet-ipr.2018.5362
- Type: Article
Band selection is an important dimensionality reduction tool for hyperspectral images (HSIs). To identify the most informative band subset from the hundreds of highly correlated bands in an HSI, a novel hyperspectral band selection method using a crossover-based gravitational search algorithm (CGSA) is presented in this study. In this method, the discriminative capability of each band subset is evaluated by a combined optimisation criterion constructed from the overall classification accuracy and the size of the band subset. As the criterion evolves, the subset is updated using the V-shaped transfer-function-based CGSA, and the band subset with the best fitness value is ultimately selected. Experiments on two public hyperspectral datasets, the Indian Pines and Pavia University datasets, were conducted to test the performance of the proposed method. Comparing the results against the basic GSA and PSOGSA (hybrid PSO and GSA) revealed that all three GSA variants can considerably reduce the band dimensionality of HSIs without harming classification accuracy. Moreover, the CGSA is superior to the other two GSA variants in both effectiveness and efficiency.
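The two building blocks named above can be sketched directly: a combined criterion trading classification accuracy against subset size, and a V-shaped transfer function turning a continuous velocity into a bit-flip probability for binary band masks. The weighting `w` and the `tanh` form are hypothetical choices; the paper does not publish these exact formulas in the abstract.

```python
import numpy as np

def fitness(accuracy, n_selected, n_total, w=0.95):
    """Combined criterion: reward overall classification accuracy,
    penalise band-subset size (illustrative weighting only)."""
    return w * accuracy + (1 - w) * (1 - n_selected / n_total)

def v_transfer(v):
    """V-shaped transfer function: maps an agent's velocity to a
    probability of flipping the corresponding band bit."""
    return np.abs(np.tanh(v))
```

With this criterion, a subset of 10 bands scores higher than a subset of 100 bands at equal accuracy, which is exactly the pressure that shrinks the selected band set.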
- Author(s): Wenbo Yu ; Miao Zhang ; Yi Shen
- Source: IET Image Processing, Volume 13, Issue 2, p. 287 –298
- DOI: 10.1049/iet-ipr.2018.5550
- Type: Article
Feature selection, which is called band selection for hyperspectral data, is widely used for hyperspectral images. A novel hyperspectral band selection method based on combined fast and adaptive tridimensional empirical mode decomposition (cFATEMD) is proposed in this study. The hyperspectral data are decomposed by FATEMD into a set of tridimensional intrinsic mode functions (TIMFs) and a residual (RES), which separates high-frequency noise from the signal. A stopping condition for the decomposition is proposed based on the k-means clustering algorithm and the Dunn validity index, which prevents excessive decomposition and makes the generated RES retain as much useful information as possible. Considering the useful information in the decomposition results, the TIMFs and the RES are combined into a new data set according to their spectral similarity to the original data. Four state-of-the-art band selection methods, combined with the proposed cFATEMD, are used to select bands from the new combined data. Experiments are conducted on three publicly available hyperspectral datasets and the results are compared with those of the corresponding methods on the original data. Experimental results demonstrate that the proposed method yields strong classification performance.
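The stopping condition relies on the Dunn validity index; a minimal sketch of that index (standard definition, computed here by brute force on small data; how the authors plug it into the k-means stop rule is not reproduced):

```python
import numpy as np

def dunn_index(points, labels):
    """Dunn validity index: minimum inter-cluster distance divided by
    maximum intra-cluster diameter (higher means better separation)."""
    clusters = [points[labels == c] for c in np.unique(labels)]
    # largest within-cluster diameter
    diam = max(
        np.linalg.norm(c[:, None] - c[None, :], axis=2).max() for c in clusters
    )
    # smallest between-cluster distance
    sep = min(
        np.linalg.norm(a[:, None] - b[None, :], axis=2).min()
        for i, a in enumerate(clusters)
        for b in clusters[i + 1:]
    )
    return sep / diam
```

A decomposition level whose clustering keeps this index high is well separated; a sharp drop can signal that further decomposition is no longer adding structure.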
- Author(s): Weizhao Chen ; Zhijing Yang ; Faxian Cao ; Yijun Yan ; Meilin Wang ; Chunmei Qing ; Yongqiang Cheng
- Source: IET Image Processing, Volume 13, Issue 2, p. 299 –306
- DOI: 10.1049/iet-ipr.2018.5419
- Type: Article
Dimensionality reduction is of high importance in hyperspectral data processing, as it can effectively reduce data redundancy and computation time while improving classification accuracy. Band selection and feature extraction are two widely used dimensionality reduction techniques. By integrating the advantages of both, the authors propose a new method for reducing the dimension of hyperspectral image data. First, a new and fast band selection algorithm for hyperspectral images is proposed based on an improved determinantal point process (DPP). To reduce the amount of calculation, the dual-DPP is used for fast sampling of representative pixels, followed by k-nearest-neighbour-based local processing to explore more spatial information. These representative pixels are used to construct multiple adjacency matrices that describe the correlation between bands based on mutual information. To further improve classification accuracy, two-dimensional singular spectrum analysis is used for feature extraction from the selected bands. Experiments show that the proposed method selects a low-redundancy, representative band subset, reducing both data dimension and computation time. Furthermore, the proposed dimensionality reduction algorithm outperforms a number of state-of-the-art methods in terms of classification accuracy.
- Author(s): Ram Narayan Patro ; Subhashree Subudhi ; Pradyut Kumar Biswal
- Source: IET Image Processing, Volume 13, Issue 2, p. 307 –315
- DOI: 10.1049/iet-ipr.2018.5109
- Type: Article
Hyperspectral images (HSIs) often suffer from the Hughes effect, as they record information of a single scene in several spectral bands; this can be mitigated by reducing the dimension of the HSI. A novel framework for hybrid band selection (BS) is proposed in this work. The proposed technique is a multi-objective approach that incorporates a clustering (spectral) measure and an intra-band (spatially filtered) de-correlation measure (Frobenius norm) as two cost functions to be maximised. Heuristic optimisers are very sensitive to their associated hyperparameters, so in the proposed architecture Jaya optimisation is used for BS, as it has no algorithm-specific control parameters. Spatial and spectral features are extracted for both BS and classification (using a support vector machine) to evaluate the effectiveness of the proposed and compared approaches. The evaluated performance measures are overall accuracy, average accuracy, and kappa (K). The experimental results show that the proposed BS approach is better than, or competitive with, other state-of-the-art methods. The advantages of the proposed framework are: (i) a spectrally distinct and spatially invariant objective formulation; (ii) Jaya optimisation with minimal control parameters; (iii) optimised ranking for more accurate BS; and (iv) classification using spatial–spectral features for further band reduction at the desired accuracy.
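Jaya's appeal is exactly what the abstract states: the update rule has no algorithm-specific control parameters. The standard rule (shown here for minimisation on a continuous toy problem, not on the paper's band-selection encoding) moves every candidate towards the best solution and away from the worst:

```python
import numpy as np

def jaya_step(pop, scores, rng):
    """One Jaya iteration: x' = x + r1*(best - |x|) - r2*(worst - |x|).
    Only random numbers are drawn; no tunable coefficients."""
    best = pop[scores.argmin()]
    worst = pop[scores.argmax()]
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
```

In practice each step is followed by greedy acceptance (keep a move only if it improves the cost), which guarantees the population never degrades.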
- Author(s): Xiaorong Zhang ; Zhibin Pan ; Bingliang Hu ; Xi Zheng ; Weihua Liu
- Source: IET Image Processing, Volume 13, Issue 2, p. 316 –322
- DOI: 10.1049/iet-ipr.2017.1173
- Type: Article
Target detection in hyperspectral images (HSIs) is a research hotspot in remote sensing and is of particular importance in many domains, especially military applications. Unsupervised target detection is usually more difficult because no prior information about the target is available, and traditional algorithms exploit only spectral information. This study introduces the idea of saliency detection from visual techniques into the HSI processing domain and proposes a novel approach named spectral saliency target detection (SSD). It establishes a novel saliency model that utilises both spatial and spectral saliency, and within the SSD framework this model is combined with a spectral matching algorithm so that it performs well even when the target is concealed and small. An HSI set comprising eight different scenes with complex backgrounds is set up to evaluate the proposed algorithm. The visible detection results demonstrate that the SSD algorithm outperforms the others, and the receiver operating characteristic (ROC) curve and the area under the ROC curve are applied to evaluate the results, showing superior and stable performance.
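The area under the ROC curve used for evaluation has a convenient rank-based form: it equals the probability that a randomly chosen target pixel scores higher than a randomly chosen background pixel. A brute-force sketch of that definition (fine for small pixel sets; this is the standard statistic, not code from the paper):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via pairwise comparison of target (label 1) and background
    (label 0) detection scores; ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0 means every target pixel outscores every background pixel, i.e. perfect detection at some threshold.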
- Author(s): Shahram Sharifi Hashjin ; Ali Darvishi Boloorani ; Safa Khazai ; Ata Abdollahi Kakroodi
- Source: IET Image Processing, Volume 13, Issue 2, p. 323 –331
- DOI: 10.1049/iet-ipr.2018.5324
- Type: Article
Target detection at sub-pixel abundances is one of the challenging issues in hyperspectral image processing. Selecting optimal bands to improve sub-pixel target detection (STD) performance is a common solution applied by many researchers; nevertheless, the absence of sufficient training data is the main weakness of this approach. The present research introduces a new band selection method for STD in hyperspectral images based on creating training data, in which the desired target spectrum is implanted randomly in a series of host pixels drawn from the entire hyperspectral image. Afterwards, by running an optimisation algorithm twice with the aim of minimising the false alarm rate (FAR) of the local adaptive coherence estimator target detection algorithm, the number of optimal bands and the optimal spectral bands are selected. In this study, the performance of three optimisation methods, the genetic algorithm (GA), grey wolf optimisation (GWO), and particle swarm optimisation (PSO), is compared. Experimental results on HyMap and Hyperion datasets show that the proposed method obtains the minimum FAR compared with the other evaluated methods. Also, based on the results obtained, GWO outperforms GA and PSO in the STD domain.
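Implanting a synthetic sub-pixel target is usually done as a linear mixture of the host pixel and the target spectrum at a chosen abundance; a minimal sketch of that step (the linear mixing form is a standard convention, assumed rather than quoted from the paper):

```python
import numpy as np

def implant_target(host, target, abundance):
    """Synthetic sub-pixel target: mix the target spectrum into a host
    pixel spectrum at the given fractional abundance (0..1)."""
    return abundance * target + (1 - abundance) * host
```

Sweeping `abundance` over small values (e.g. 0.05 to 0.5) yields training pixels whose detection difficulty matches the sub-pixel regime the band selection is optimised for.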
- Author(s): Yuquan Gan ; Bingliang Hu ; Weihua Liu ; Shuang Wang ; Geng Zhang ; Xiangpeng Feng ; Desheng Wen
- Source: IET Image Processing, Volume 13, Issue 2, p. 332 –343
- DOI: 10.1049/iet-ipr.2018.5079
- Type: Article
Hyperspectral images are mixtures of the spectra of materials in a scene, so accurate analysis requires spectral unmixing, whose result is the material spectral signatures and their corresponding fractions; the materials are called endmembers, and endmember extraction amounts to acquiring their spectral signatures. In this study, the authors propose a new endmember extraction algorithm for hyperspectral images based on QR factorisation using Givens rotations (EEGR). The algorithm is evaluated by comparing its performance with two popular endmember extraction methods, vertex component analysis (VCA) and maximum volume by Householder transformation (MVHT), on both simulated mixtures and a real hyperspectral image, with quantitative analysis presented. EEGR exhibits better performance than VCA and MVHT. Moreover, the hardware-friendly structure of Givens rotations makes the EEGR algorithm convenient to parallelise for real-time applications.
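The core primitive, QR factorisation by Givens rotations, zeroes sub-diagonal entries one at a time with 2x2 plane rotations; each rotation touches only two rows, which is what makes the scheme hardware-friendly. A compact sketch of the factorisation itself (the EEGR endmember-selection logic built on top of it is not reproduced):

```python
import numpy as np

def givens_qr(A):
    """QR factorisation by Givens rotations: for each column, rotate
    pairs of rows to annihilate the entry below the diagonal."""
    m, n = A.shape
    Q = np.eye(m)
    R = A.astype(float).copy()
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0:
                continue                      # entry already zero
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])   # 2x2 plane rotation
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R
```

Because each rotation involves only rows i-1 and i, rotations acting on disjoint row pairs can run in parallel, the property the abstract highlights for real-time use.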
- Author(s): Wenfei Luo ; Lianru Gao ; Ruihao Zhang ; Andrea Marinoni ; Bing Zhang
- Source: IET Image Processing, Volume 13, Issue 2, p. 344 –354
- DOI: 10.1049/iet-ipr.2018.5458
- Type: Article
Spectral unmixing (SU) is a useful tool for hyperspectral remote sensing image analysis. However, owing to the interference of spectral variability and the non-linearity caused by multiple photon scattering, the result can be inaccurate. In addition, unmixing performance typically relies on prior knowledge of the endmembers; although many classical endmember extraction algorithms have been presented, it is hard to obtain accurate endmembers in practical applications. This study presents a bilinear normal mixing model (BNMM) to tackle these issues: BNMM employs a polynomial post-non-linear mixing model to alleviate the effect of non-linearity and uses a normal distribution model to reduce the influence of endmember variability. Based on the BNMM, the authors develop a Hamiltonian Monte Carlo algorithm for SU. The experimental results demonstrate that the proposed algorithm outperforms other classical unmixing algorithms on simulated and benchmark datasets.
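A generic bilinear forward model of the kind BNMM builds on augments the linear mixture with pairwise endmember interaction terms. The sketch below shows that generic structure only; the BNMM's normal-distribution endmember model and its exact parameterisation are not reproduced:

```python
import numpy as np

def bilinear_mix(M, a, b):
    """Bilinear mixture: linear term M @ a plus pairwise endmember
    interactions b_k * (m_i ⊙ m_j), one coefficient per (i, j) pair."""
    y = M @ a
    p = M.shape[1]
    k = 0
    for i in range(p):
        for j in range(i + 1, p):
            y = y + b[k] * (M[:, i] * M[:, j])
            k += 1
    return y
```

Setting all interaction coefficients `b` to zero recovers the classical linear mixing model, which is why bilinear models are a strict generalisation of it.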
- Author(s): Shuang Huang ; Sheng-Bo Chen ; Yuan-Zhi Zhang
- Source: IET Image Processing, Volume 13, Issue 2, p. 355 –364
- DOI: 10.1049/iet-ipr.2018.5026
- Type: Article
Extraction results from remote sensing images are typically validated against ground rock samples. However, remotely sensed image pixels cover 30 m × 30 m areas with mixed spectra, so rock samples cannot fully represent the extraction results on an image. Here, alteration information associated with the Águas Claras iron deposit, Brazil, was analysed using Landsat Enhanced Thematic Mapper Plus (ETM+), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and Hyperion data, all with 30 m resolution. With improving spectral resolution, the extracted results progress from general alteration information to specific altered rocks and minerals, and the accuracy of the alteration information improves as the data extracted from these sources verify one another. Minerals extracted from Hyperion corresponded to altered assemblages identified from ASTER, and both were consistent with alteration information extracted from ETM+ at the same locations. Alteration information extracted from the three images was located within ore formations, controlled by the north-east-trending faults, and matched well with known deposits. Information extraction based on the integrated application of multi-source remote sensing data can compensate for the deficiencies of single data sources, and the results can verify one another to improve reliability, particularly in areas that are inaccessible and lack sufficient field confirmation.
Guest Editorial: Hyperspectral Imaging and Applications
Spatial-spectral classification of hyperspectral images: a deep learning framework with Markov Random fields based modelling
Convolutional neural network in network (CNNiN): hyperspectral image classification and dimensionality reduction
Weighted Kernel joint sparse representation for hyperspectral image classification
Hyperspectral image clustering via sparse dictionary-based anchored regression
Morphology-based structure-preserving projection for spectral–spatial feature extraction and classification of hyperspectral data
Hyperspectral band selection using crossover-based gravitational search algorithm
Combined FATEMD-based band selection method for hyperspectral images
Dimensionality reduction based on determinantal point process and singular spectrum analysis for hyperspectral images
Spectral clustering and spatial Frobenius norm-based Jaya optimisation for BS of hyperspectral images
Target detection of hyperspectral image based on spectral saliency
Selecting optimal bands for sub-pixel target detection in hyperspectral images based on implanting synthetic targets
Endmember extraction from hyperspectral imagery based on QR factorisation using Givens rotations
Bilinear normal mixing model for spectral unmixing
Comparison of altered mineral information extracted from ETM+, ASTER and Hyperion data in Águas Claras iron ore, Brazil
- Author(s): Hui Ying Khaw ; Foo Chong Soon ; Joon Huang Chuah ; Chee-Onn Chow
- Source: IET Image Processing, Volume 13, Issue 2, p. 365 –374
- DOI: 10.1049/iet-ipr.2018.5776
- Type: Article
Most impulse denoisers are either median-filter-based or fuzzy-filter-based and perform well only under low noise. This study presents an efficient convolutional neural network (CNN) with a particle swarm optimisation (PSO) model for high-density impulse noise removal. The proposed model consists of two parts: impulse noise removal and impulse-noisy-pixel detection for restoration. The authors' model first leverages the ability of a deep CNN architecture to separate noise from the noisy image, then adopts PSO to find the optimal threshold values for detecting impulse-noisy pixels. The ensemble of these algorithms is an intelligent and adaptive solution that produces a clean output while preserving significant pixel information. Targeting high-density impulse noise, the authors trained their model on a massive collection of natural images, with 14 standard test images used for validation. To validate the robustness of the proposed method, different levels of high-density impulse noise are considered. The final denoised images show that the model is reliable, in terms of both visual quality and quantitative evaluation, on greyscale and colour images.
- Author(s): Yang Liu ; Dongmei Fu ; Zhicheng Huang ; Hejun Tong
- Source: IET Image Processing, Volume 13, Issue 2, p. 375 –381
- DOI: 10.1049/iet-ipr.2018.5922
- Type: Article
Glaucoma is one of the leading causes of blindness in the world, and optic disc segmentation is an indispensable step for its automatic detection from fundus images. In this study, the authors propose an automatic optic disc segmentation approach using adversarial training. An improved 'U-Net' is used as the segmentation network to detect the optic disc in fundus images, and a 'patch-level' adversarial network is added to enforce higher-order consistency between the ground truth and the output of the segmentation network, further boosting its performance. In addition, a new loss function is designed to address pixel-level class imbalance in small-target-region extraction from medical images. Together these improvements effectively increase segmentation accuracy on hard examples. The authors' method achieves a Dice coefficient of 0.967 on the Drishti-GS dataset and 0.951 on the RIM-ONEv3 dataset, outperforming most existing methods.
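The reported figures use the Dice coefficient, the standard overlap measure for binary segmentation masks; a minimal sketch of its definition:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```

Because the optic disc is a small region in a large image, Dice is preferred over pixel accuracy, which a trivial all-background prediction would inflate.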
- Author(s): Sheng Long Lee ; Mohammad Reza Zare ; Henning Muller
- Source: IET Image Processing, Volume 13, Issue 2, p. 382 –391
- DOI: 10.1049/iet-ipr.2018.5054
- Type: Article
Much of medical knowledge is stored in the biomedical literature, collected in archives like PubMed Central that continue to grow rapidly. A significant part of this knowledge is contained in images with limited metadata available, which makes it difficult to explore the visual knowledge in the biomedical literature; extracting metadata from visual content is therefore important. One important piece of metadata is the type of the image, which could be one of the various medical imaging modalities, such as X-ray, computed tomography or magnetic resonance images, or one of the general graphs that are frequent in the literature. This study explores a late, score-based fusion of several deep convolutional neural networks with a traditional hand-crafted bag-of-visual-words classifier to classify images from the biomedical literature into image types or modalities. It achieved a classification accuracy of 85.51% on the ImageCLEF 2013 modality classification task, which is better than the best visual methods in the challenge the data were produced for, and comparable to mixed methods that use both visual and textual information. It achieved similarly good accuracies of 84.23% and 87.04% before and after augmentation, respectively, on the related ImageCLEF 2016 subfigure classification task.
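Late, score-based fusion itself is simple: each classifier emits a class-score vector, and the fused decision is the argmax of their (optionally weighted) average. A minimal sketch under that assumption (the paper's exact weighting scheme is not reproduced):

```python
import numpy as np

def late_fusion(score_lists, weights=None):
    """Late score-based fusion: weighted average of per-classifier
    class-score vectors, then argmax for the final label."""
    scores = np.stack(score_lists)          # (n_classifiers, n_classes)
    if weights is None:
        weights = np.ones(len(score_lists))
    weights = np.asarray(weights, float)
    fused = (weights[:, None] * scores).sum(axis=0) / weights.sum()
    return fused, int(fused.argmax())
```

This is what lets heterogeneous models, deep CNNs and a bag-of-visual-words classifier, be combined without sharing features: only their output scores meet.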
- Author(s): Maissa Hamouda ; Karim Saheb Ettabaa ; Med Salim Bouhlel
- Source: IET Image Processing, Volume 13, Issue 2, p. 392 –398
- DOI: 10.1049/iet-ipr.2018.5063
- Type: Article
Image classification with convolutional neural networks (CNNs) has shown great performance in recent years in several areas, such as image processing and pattern recognition; however, there is still room for improvement. A main problem in CNNs is the initialisation of the number and size of the filters, which can markedly change the results. In this study, the authors make three major contributions based on the CNN model: (i) adaptive selection of the number of filters; (ii) an adaptive window size; and (iii) an adaptive filter size. Test results on different hyperspectral datasets (SalinasA, Pavia University, and Indian Pines) show that this framework improves the accuracy of hyperspectral image classification.
- Author(s): Xin Xu ; Jiuzhen Liang ; Chen Chen ; Zhenjie Hou
- Source: IET Image Processing, Volume 13, Issue 2, p. 399 –408
- DOI: 10.1049/iet-ipr.2018.6327
- Type: Article
In this study, the authors focus on the challenging problem of verifying faces captured under unconstrained conditions. Unconstrained face images often vary greatly in pose, illumination, expression, occlusion, and age. To address these challenges, the authors combine a face frontalisation method with metric learning. To deal with pose variations, they apply an improved 3D face frontalisation method to generate frontal views of the face images. Recent studies have observed that bilinear similarity and the Mahalanobis distance perform promisingly in measuring the similarity of two images. Building on these studies, they propose a weighted similarity and distance metric learning method that balances the roles of bilinear similarity and Mahalanobis distance to better measure the similarity of an image pair. All experiments are conducted on the Labelled Faces in the Wild database, and the results show the effectiveness of the method.
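The balanced metric can be sketched as a weighted combination of a bilinear similarity x^T M y and a Mahalanobis-style distance (x - y)^T G (x - y); the mixing weight `alpha` and the fact that both M and G would be learned from data are assumptions made here for illustration:

```python
import numpy as np

def weighted_metric(x, y, M, G, alpha=0.5):
    """Balance bilinear similarity x^T M y against a Mahalanobis-style
    distance (x-y)^T G (x-y); larger output means more similar."""
    sim = x @ M @ y
    d = x - y
    dist = d @ G @ d
    return alpha * sim - (1 - alpha) * dist
```

Verification then reduces to thresholding this score: a pair of frontalised face features is declared "same person" when the score exceeds a learned threshold.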
High-density impulse noise detection and removal using deep convolutional neural network with particle swarm optimisation
Optic disc segmentation in fundus images using adversarial training
Late fusion of deep learning and handcrafted visual features for biomedical image modality classification
Hyperspectral imaging classification based on convolutional neural networks by adaptive sizes of windows and filters
Weighted similarity and distance metric learning for unconstrained face verification with 3D frontalisation