IET Image Processing
Volume 14, Issue 5, 17 April 2020
- Author(s): Hao Liu ; Ce Li ; Dong Zhang ; Yannan Zhou ; Shaoyi Du
- Source: IET Image Processing, Volume 14, Issue 5, p. 807 –817
- DOI: 10.1049/iet-ipr.2019.0856
- Type: Article
In this study, the authors investigate no-reference (NR) quality assessment of enhanced images. Because reference images are difficult to obtain for enhanced images, this study proposes an NR image quality assessment (IQA) model based on colour space distribution. Given an enhanced image, the method first uses gist features to select a clear target image whose scene, colour, and quality resemble those of the hypothetical reference image. Colour transfer between the input image and the target image is then used to construct a reference image. Next, an appropriate IQA method is used to assess the enhanced image's quality: the absolute colour difference and feature similarity (FSIM) measure colour and grey-scale image quality, respectively. Extensive experiments demonstrate that the proposed method evaluates enhanced image quality well for X-ray, dust, underwater, and low-light images, and the results are consistent with human subjective evaluation.
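The colour-transfer step can be sketched as a simple global statistics transfer in the style of Reinhard et al. (matching per-channel mean and standard deviation); the paper's exact transfer and the gist-based retrieval are not reproduced here, so treat this as an illustrative assumption:

```python
import numpy as np

def transfer_colour(source, target):
    """Match each channel of `source` to the mean/std of `target`
    (global statistics transfer, Reinhard-style)."""
    source = source.astype(np.float64)
    target = target.astype(np.float64)
    out = np.empty_like(source)
    for c in range(source.shape[-1]):
        s, t = source[..., c], target[..., c]
        s_std = s.std() if s.std() > 1e-8 else 1.0  # guard flat channels
        out[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return out
```

In practice such transfers are usually done in a decorrelated space (e.g. Lab) rather than raw RGB.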
- Author(s): Muhammad Attique Khan ; Tallha Akram ; Muhammad Sharif ; Nazeer Muhammad ; Muhammad Younus Javed ; Syed Rameez Naqvi
- Source: IET Image Processing, Volume 14, Issue 5, p. 818 –829
- DOI: 10.1049/iet-ipr.2018.5769
- Type: Article
Human motion analysis has received a lot of attention in the computer vision community during the last few years. This research domain is supported by a wide spectrum of applications including video surveillance, patient monitoring systems, and pedestrian detection, to name a few. In this study, an improved cascaded design for human motion analysis is presented; it consolidates four phases: (i) acquisition and preprocessing, (ii) frame segmentation, (iii) feature extraction and dimensionality reduction, and (iv) classification. The implemented architecture takes advantage of the CIE-Lab and National Television System Committee colour spaces, and also performs contrast stretching using the proposed red–green–blue* colour space enhancement technique. A parallel design utilising an attention-based motion estimation and segmentation module is also proposed in order to avoid the detection of false moving regions. In addition to these contributions, the proposed feature selection technique, called entropy-controlled principal components with weights minimisation, further improves the classification accuracy. The authors' claims are supported by a comparison of six state-of-the-art classifiers tested on five standard benchmark data sets (Weizmann, KTH, UIUC, Muhavi, and WVU), where the results reveal improved correct classification rates of 96.55, 99.50, 99.40, 100, and 100%, respectively.
- Author(s): Chengtao Zhu and Yau-Zen Chang
- Source: IET Image Processing, Volume 14, Issue 5, p. 830 –837
- DOI: 10.1049/iet-ipr.2019.0144
- Type: Article
Infrared imaging is less susceptible to illumination conditions and haze than visible-light imaging. This advantage makes infrared sensing suitable for providing remote visibility with reduced distortion. However, infrared images tend to have low resolution and lack the rich textures that facilitate stereo matching. To enhance the applicability of infrared stereo imaging, the authors re-examine guided-image techniques to include advanced edge-aware filters for aggregation and propose a novel guided-image filtering scheme. Based on the exponential moving average, the weights are recursively calculated such that all pixels in the infrared image pair can contribute to the discrepancy cost. This arrangement allows additional pixels to be involved in the cost aggregation, reducing the demand for rich texture. Experimental results on the colour and thermal stereo (CATS) benchmark demonstrate that the proposed approach outperforms several state-of-the-art approaches in generating accurate disparity maps.
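The recursive aggregation idea can be illustrated with a minimal 1-D sketch: an exponential moving average run in both directions over a matching-cost row, so every pixel contributes with exponentially decaying weight. The paper's actual filter is edge-aware and 2-D; this two-pass scalar version is only an assumption for illustration:

```python
import numpy as np

def ema_aggregate(cost, alpha=0.3):
    """Aggregate a 1-D matching-cost row with a recursive exponential
    moving average, run forward and backward so all pixels contribute."""
    cost = np.asarray(cost, dtype=np.float64)
    fwd = np.empty_like(cost)
    fwd[0] = cost[0]
    for i in range(1, len(cost)):          # left-to-right recursion
        fwd[i] = alpha * cost[i] + (1 - alpha) * fwd[i - 1]
    bwd = np.empty_like(cost)
    bwd[-1] = cost[-1]
    for i in range(len(cost) - 2, -1, -1):  # right-to-left recursion
        bwd[i] = alpha * cost[i] + (1 - alpha) * bwd[i + 1]
    return 0.5 * (fwd + bwd)
```

Smaller `alpha` spreads support over more pixels, which is exactly what reduces the need for rich local texture.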
- Author(s): Jian Ji ; Jiajie Wei ; Guoliang Fan ; Mengqi Bai ; Jingjing Huang ; Qiguang Miao
- Source: IET Image Processing, Volume 14, Issue 5, p. 838 –844
- DOI: 10.1049/iet-ipr.2018.5403
- Type: Article
Image patch priors have become a popular tool for image denoising. The Gaussian mixture model (GMM) is remarkably effective in modelling natural image patches. However, GMM prior learning using the expectation maximisation (EM) algorithm is sensitive to the initialisation, often leading to a low convergence rate of parameter estimation. In this study, a novel sampling method called random neighbourhood resampling (RNR) is proposed to improve the accuracy and efficiency of parameter estimation. An enhanced GMM (EGMM) learning algorithm is further developed by incorporating RNR into the EM algorithm to initialise and update the GMM prior. The learned EGMM prior is applied in the expected patch log-likelihood (EPLL) framework for image denoising. The effectiveness of the proposed RNR and EGMM algorithms is demonstrated via extensive experiments against state-of-the-art image denoising methods: the denoised images obtained with the proposed method achieve higher PSNR, and the method efficiently reduces denoising time compared with the basic EPLL method.
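The setting can be sketched with patch extraction plus a plain EM loop for a spherical GMM. The random point initialisation below is precisely the weak spot the paper's RNR scheme targets; RNR itself and the full-covariance model are not reproduced, so this is a hedged baseline sketch only:

```python
import numpy as np

def extract_patches(img, size=4, step=2):
    """Collect overlapping size x size patches as flattened vectors."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, step)
                     for j in range(0, w - size + 1, step)])

def em_gmm(X, k=2, iters=30, seed=0):
    """Plain EM for a spherical GMM; the paper replaces the random
    initialisation used here with its RNR sampling scheme."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]    # random-point init
    var = np.full(k, X.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    d = X.shape[1]
    for _ in range(iters):
        # E-step: responsibilities from spherical Gaussian log-densities
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(0)
        pi = nk / len(X)
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (nk * d) + 1e-6
    return pi, mu, var
```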
- Author(s): Enes Cemiloglu and Gokce Nur Yilmaz
- Source: IET Image Processing, Volume 14, Issue 5, p. 845 –852
- DOI: 10.1049/iet-ipr.2019.0275
- Type: Article
There is an urgent need for a robust video quality assessment (VQA) model that can efficiently evaluate the quality of video content varying in distortion and content type in the absence of a reference video. Considering this need, a novel no-reference (NR) model relying on the spatiotemporal statistics of the distorted video in the three-dimensional (3D) discrete cosine transform (DCT) domain is proposed in this study. As the first contribution, the video content is adaptively segmented into cubes of different sizes and spatiotemporal contents in line with human visual system (HVS) properties, and the 3D-DCT is applied to these cubes. As the second contribution, efficient features (i.e. spectral behaviour, energy variation, distances between spatiotemporal frequency bands, and DC variation) associated with the contents of these cubes are extracted. These features are then related to the subjective experimental results obtained from the EPFL-PoliMi video database using linear regression analysis to build the model. The evaluation results show that the proposed model, unlike many top-performing NR-VQA models (e.g. V-BLIINDS, VIIDEO, and SSEQ), achieves high and stable performance across videos with different contents and distortions.
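The feature-extraction step can be illustrated on one video cube: take its 3D-DCT and summarise DC energy, AC energy, and a simple energy-weighted spectral spread. The three statistics here are generic stand-ins, not the paper's exact feature definitions:

```python
import numpy as np
from scipy.fft import dctn

def cube_features(cube):
    """Spatiotemporal statistics of one video cube in the 3D-DCT domain:
    DC energy, AC energy, and an energy-weighted spectral-spread measure."""
    coeffs = dctn(cube.astype(np.float64), norm='ortho')
    dc = coeffs[0, 0, 0]
    ac = coeffs.copy()
    ac[0, 0, 0] = 0.0
    ac_energy = (ac ** 2).sum()
    # distance of each coefficient from the DC corner, weighted by energy
    idx = np.indices(coeffs.shape)
    dist = np.sqrt((idx ** 2).sum(0))
    spread = (dist * ac ** 2).sum() / (ac_energy + 1e-12)
    return np.array([dc ** 2, ac_energy, spread])
```

Because the orthonormal DCT preserves energy, DC energy plus AC energy equals the cube's total energy.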
- Author(s): Mohammad Hassan Maleki ; Ghosheh Abed Hodtani ; Seyed Hesam Odin Hashemi
- Source: IET Image Processing, Volume 14, Issue 5, p. 853 –861
- DOI: 10.1049/iet-ipr.2019.0613
- Type: Article
Image classification is very important in pattern recognition and computer vision, where feature pooling methods such as max-pooling, sum-pooling, and average-pooling have been widely used to integrate the final representation. In this study, the authors propose a new method, K-strongest responses (KSRs) on the dictionary atoms, for integrating the coding coefficients into the final representation; compared with the previous pooling methods, it produces better performance on the image classification task. On the basis of the KSR method, a new two-part framework consisting of KSR and bag-of-features is proposed to improve classification accuracy and generate a more compact and discriminative final representation. To evaluate the performance of the proposed method and framework, the authors apply them to locality-constrained linear coding, linear distance coding, and sparse coding using two scene-classification benchmarks: the 19-class satellite scene and UC Merced Land datasets. The results show that the coding coefficients integrated by the proposed method and framework are more discriminative than those of other methods.
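One plausible reading of KSR pooling is averaging the K strongest responses per dictionary atom, which reduces to max-pooling at K = 1; the paper's exact integration rule may differ, so this is an illustrative assumption:

```python
import numpy as np

def ksr_pool(codes, k=3):
    """Pool a (n_descriptors, n_atoms) coding matrix by averaging the
    k largest-magnitude responses per atom; k=1 reduces to max-pooling."""
    top = np.sort(np.abs(codes), axis=0)[-k:]   # k strongest responses per atom
    return top.mean(axis=0)
```

Unlike a single max, averaging a few strong responses makes the pooled value less sensitive to one outlier coefficient.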
- Author(s): Pengfei Liu
- Source: IET Image Processing, Volume 14, Issue 5, p. 862 –873
- DOI: 10.1049/iet-ipr.2018.5930
- Type: Article
As an important and challenging problem in image processing, multiplicative noise removal (MNR) has attracted great attention, and many variational methods have been proposed for it over the past few decades. Among these, total variation (TV) and its higher-order extensions are very effective: the former preserves sharp edges but causes undesirable staircase effects, while the latter reduces the staircase effects but sometimes smooths image details. To overcome these drawbacks while making full use of their merits, the authors propose a novel hybrid higher-order TV regularisation model for MNR. The novelty of the model lies in combining the image prior information of first-order and second-order derivatives into a new higher-order regulariser, named hybrid higher-order TV (HHTV). More specifically, a more tractable equivalent formulation of HHTV is derived and used to design an efficient alternating iterative algorithm for solving the proposed model. Finally, the experimental results demonstrate that the proposed HHTV method outperforms several state-of-the-art methods in terms of image quality and convergence speed.
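The hybrid idea can be sketched by evaluating a weighted sum of a first-order TV term (forward differences) and a second-order term (second differences). This uses a simple anisotropic discretisation for illustration; the paper's HHTV and its equivalent formulation are more involved:

```python
import numpy as np

def hybrid_tv(u, w=0.5):
    """Hybrid regulariser value: w * (first-order anisotropic TV)
    + (1 - w) * (second-order TV), via finite differences."""
    ux = np.diff(u, axis=1)                      # horizontal forward differences
    uy = np.diff(u, axis=0)                      # vertical forward differences
    tv1 = np.abs(ux).sum() + np.abs(uy).sum()
    uxx = np.diff(u, 2, axis=1)                  # second differences
    uyy = np.diff(u, 2, axis=0)
    tv2 = np.abs(uxx).sum() + np.abs(uyy).sum()
    return w * tv1 + (1 - w) * tv2
```

A linear ramp has zero second-order cost but nonzero first-order cost, which is why the second-order term suppresses staircasing in smooth gradient regions.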
- Author(s): Safaa Magdy ; Yasmine Abouelseoud ; Mervat Mikhail
- Source: IET Image Processing, Volume 14, Issue 5, p. 874 –881
- DOI: 10.1049/iet-ipr.2019.0575
- Type: Article
Managing large personal image databases requires efficient privacy-preserving indexing methods to allow their outsourcing to possibly curious cloud servers. To construct a secure inverted index in this paper, visual words are first extracted from stored images based on Speeded-Up Robust Features (SURF). Next, order-preserving encryption (OPE) is used to encipher the occurrence frequencies of the extracted visual words. Another scale- and rotation-invariant feature, the local HSV histogram, is included for comparison; from the obtained results, it is apparent that SURF achieves more precise results, and aggregating both features further improves accuracy. The effects of the visual-word weighting scheme and the number of visual words on performance are investigated: weighted term frequency-inverse document frequency (tf-idf) together with the Jaccard similarity measure yields the best performance. OPE is shown to have a minor impact on retrieval accuracy, and a lookup table is constructed to reduce encryption time. The inverted index reduces search time significantly compared with a sequential search scheme, as apparent from the results. A comparative study with recent related schemes demonstrates the competitiveness of the implemented system in terms of computational efficiency and accuracy.
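The ranking side (before any encryption) can be sketched with standard tf-idf weighting of visual-word histograms and a generalised (weighted) Jaccard similarity; the secure index and OPE layer are not modelled here:

```python
import numpy as np

def tfidf(counts):
    """counts: (n_images, n_visual_words) occurrence matrix -> tf-idf weights."""
    tf = counts / np.maximum(counts.sum(1, keepdims=True), 1)
    df = (counts > 0).sum(0)                       # document frequency per word
    idf = np.log(len(counts) / np.maximum(df, 1))
    return tf * idf

def jaccard(a, b):
    """Generalised (weighted) Jaccard similarity of two weight vectors."""
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom > 0 else 1.0
```

Because OPE preserves the order of the enciphered frequencies, order-based comparisons of such weights can still be ranked server-side, at the cost of leaking that ordering.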
- Author(s): Evgin Goceri
- Source: IET Image Processing, Volume 14, Issue 5, p. 882 –889
- DOI: 10.1049/iet-ipr.2019.0312
- Type: Article
Visual evaluation of many magnetic resonance images is a difficult task; therefore, computer-assisted brain tumour classification techniques have been proposed, but these techniques have several drawbacks or limitations. Capsule-based neural networks are new approaches that can preserve spatial relationships of learned features using a dynamic routing algorithm. In this way, not only does tumour recognition performance increase, but sampling efficiency and generalisation capability also improve. Therefore, in this work, a Capsule Network (CapsNet) is used to achieve fully automated classification of tumours from brain magnetic resonance images, handling the three prevalent tumour types (pituitary, glioma, and meningioma). The main contributions are as follows: 1) a comprehensive review of CapsNet-based methods is presented; 2) a new CapsNet topology is designed using Sobolev gradient-based optimisation, expectation-maximisation-based dynamic routing, and tumour boundary information; 3) the network topology is applied to categorise the three types of brain tumours; 4) comparative evaluations against the results obtained by other methods are performed. According to the experimental results, the proposed CapsNet-based technique can extract the desired features from the image data sets and provides automatic tumour classification with 92.65% accuracy.
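A small concrete piece of any CapsNet is the squashing non-linearity used inside dynamic routing, which preserves a capsule vector's orientation while mapping its length into [0, 1); this is the standard formulation, not this paper's specific EM-based routing:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """CapsNet squashing non-linearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|."""
    sq = (s ** 2).sum(axis=axis, keepdims=True)   # squared vector length
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)
```

The output length acts as the capsule's existence probability: long input vectors saturate towards length 1, short ones shrink towards 0.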
- Author(s): Yadhu Rajan Baby and Vinod Kumar Ramayyan Sumathy
- Source: IET Image Processing, Volume 14, Issue 5, p. 890 –900
- DOI: 10.1049/iet-ipr.2018.5748
- Type: Article
Lung nodule segmentation is an interesting research topic, and it serves as an effective solution for the diagnosis of lung cancer. Existing methods of lung nodule segmentation suffer from accuracy issues due to the heterogeneity of the nodules in the lungs and the presence of visual deviations in the nodules. Thus, there is a requirement for effective lung nodule segmentation that assists physicians in making accurate decisions. Accordingly, this study proposes a lung nodule segmentation process based on kernel-based Bayesian fuzzy clustering (BFC), which integrates kernel functions into BFC. Initially, the input computed tomography image is pre-processed to ensure effective segmentation, and the lobes are identified using an adaptive thresholding strategy. Then, the dominant areas in the lobes are identified using a scale-invariant feature transform descriptor, and the significant nodules are extracted using grid-based segmentation. Finally, the lung nodules are segmented using the proposed kernel-based BFC. The proposed algorithm is evaluated on the Lung Image Database Consortium and Image Database Resource Initiative database and achieves an accuracy, sensitivity, and false-positive rate of 0.955, 0.999, and 0.025, respectively.
- Author(s): Zhenjun Tang ; Hanyun Zhang ; Chi-Man Pun ; Mengzhu Yu ; Chunqiang Yu ; Xianquan Zhang
- Source: IET Image Processing, Volume 14, Issue 5, p. 901 –908
- DOI: 10.1049/iet-ipr.2019.1157
- Type: Article
Image hashing is an efficient multimedia processing technique for many applications, such as image copy detection, image authentication, and social event detection. In this study, the authors propose a novel image hashing scheme with a visual attention model and invariant moments. An important contribution is the weighted DWT (discrete wavelet transform) representation, obtained by incorporating a visual attention model, the Itti saliency model, into the LL sub-band. Since the Itti saliency model can efficiently extract a saliency map reflecting regions of attention focus, perceptual robustness of the proposed hashing is achieved. In addition, as invariant moments are robust and discriminative features, hash construction with invariant moments extracted from the weighted DWT representation ensures a good trade-off between robustness and discrimination. Extensive experiments on open image datasets validate the performance of the proposed hashing, and the results demonstrate that it is robust and discriminative. Performance comparisons with some hashing algorithms are also conducted, and the receiver operating characteristic results illustrate that the proposed hashing outperforms the compared algorithms in the trade-off between robustness and discrimination.
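Invariant moments can be illustrated with the first two Hu moments, which are invariant to translation, scale, and rotation; whether the paper uses Hu moments specifically is an assumption, but they are the classic instance of the idea:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariant moments of a grey image, from normalised
    central moments (translation-, scale-, and rotation-invariant)."""
    img = img.astype(np.float64)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    def mu(p, q):                                           # central moment
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):                                          # normalised moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Moments are computed about the centroid, so moving the same pattern inside the frame leaves them unchanged.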
- Author(s): Prachi Sharma and Radhey Shyam Anand
- Source: IET Image Processing, Volume 14, Issue 5, p. 909 –920
- DOI: 10.1049/iet-ipr.2019.0230
- Type: Article
In this study, the authors propose a novel methodology for static gesture recognition against complex backgrounds using only the depth map from Microsoft's Kinect camera. Four different types of features are extracted and analysed on two public static gesture datasets. The features extracted from the segmented hand are geometrical features, local binary patterns, the number of fingers raised in a gesture (Num), and the distances of the hand palm centre from the fingertips and the valleys between the fingers. The hand region is first segmented from the image using depth data, followed by forearm removal. Four multi-class support vector machine (SVM) kernels are also compared and used to recognise gestures with the extracted feature vector as input. The experimental results achieved recognition accuracy of 99 and on the two public complex static gesture datasets using the Gaussian SVM kernel as the classifier. The proposed approach is found to be comparable with, and even to outperform, some state-of-the-art techniques in recognition accuracy, despite using a single cue for hand segmentation and feature extraction in complex backgrounds, which avoids dependence on multiple cues and additional hardware.
- Author(s): Haider Ali ; Awal Sher ; Maryam Saeed ; Lavdie Rada
- Source: IET Image Processing, Volume 14, Issue 5, p. 921 –928
- DOI: 10.1049/iet-ipr.2018.5987
- Type: Article
Images captured in hazy or foggy weather can be seriously degraded by the scattering of atmospheric particles, which makes objects and their features difficult for computer vision systems to identify. Over the past decades, image de-hazing has been used to remove the influence of weather and improve image visualisation in hazy scenes, easing image post-processing for human-assistance systems. In this study, the authors present a variational segmentation model equipped with de-hazing constraint terms in a new coupled dehazing-segmentation model. The proposed hybrid formulation not only restores the fog/haze degradation but at the same time segments the degraded object or objects, thereby avoiding the difficulties of separately performed dehazing and segmentation pre-/post-processing. The combination takes into account image structure boundaries and image quality, leading to a robust dehazing-segmentation scheme. The advantages of the proposed method are its suitability for grey and vector-valued images, the small number of parameters involved, and the good speed of the algorithm. Experiments show that the approach outperforms state-of-the-art algorithms in segmentation accuracy while avoiding a separate dehazing preprocessing step, which would otherwise extend CPU time.
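The de-hazing constraint terms build on the standard atmospheric scattering model, I = J·t + A·(1 − t), where J is the scene radiance, t the transmission, and A the airlight. A minimal sketch of the model and its inversion (the coupled variational machinery is not reproduced):

```python
import numpy as np

def add_haze(J, t, A=0.9):
    """Atmospheric scattering model: observed I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A=0.9, t_min=0.1):
    """Invert the scattering model given transmission t and airlight A;
    t is clamped below to avoid amplifying noise where haze is dense."""
    return (I - A) / np.maximum(t, t_min) + A
```

In real methods t and A are unknown and must be estimated (e.g. via a dark-channel-style prior); here they are given, so the round trip is exact.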
- Author(s): Divya S V ; Sourabh Paul ; Umesh Chandra Pati
- Source: IET Image Processing, Volume 14, Issue 5, p. 929 –938
- DOI: 10.1049/iet-ipr.2019.0568
- Type: Article
The scale-invariant feature transform (SIFT) algorithm is the most widely used feature extraction and feature matching method in remote sensing image registration. However, its performance is affected by speckle noise in synthetic aperture radar (SAR) images, which reduces the number of correct matches as well as the correct matching rate in SAR image registration. Moreover, SAR image registration is considered a challenging task, as the images generally have significant geometric as well as intensity variations. To address these problems, a structure tensor-based SIFT algorithm is proposed to register SAR images. First, the tensor diffusion technique is used to construct the scale layers; the features are then extracted in these layers; finally, feature matching is performed between the input SAR images and correct matches are identified. The proposed method increases the number of correct matches as well as the position accuracy of registration. Experiments have been conducted on five SAR image pairs to verify the effectiveness of the method.
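The building block here is the structure tensor itself: the outer product of the image gradient, smoothed over a neighbourhood. A minimal sketch with a crude box smoothing (the paper's tensor diffusion is a more sophisticated use of the same quantity):

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel structure tensor components (Jxx, Jxy, Jyy) from image
    gradients, smoothed with a small 3x3 box window (wrap-around borders)."""
    gy, gx = np.gradient(img.astype(np.float64))  # gradients along rows, cols

    def smooth(a):
        # crude separable 3-tap box filter via shifted copies
        for ax in (0, 1):
            a = (np.roll(a, 1, ax) + a + np.roll(a, -1, ax)) / 3.0
        return a

    return smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
```

The eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]] distinguish flat regions, edges, and corners, which is what makes the tensor useful for speckle-robust feature detection.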
- Author(s): Ahmed Hechri and Abdellatif Mtibaa
- Source: IET Image Processing, Volume 14, Issue 5, p. 939 –946
- DOI: 10.1049/iet-ipr.2019.0634
- Type: Article
Nowadays, traffic sign recognition is the most important task of advanced driver assistance systems, since it improves the safety and comfort of drivers. However, it remains challenging due to the complexity of road traffic scenes. In this study, a novel two-stage approach for real-time traffic sign detection and recognition in real traffic situations is proposed. The first stage detects traffic signs and classifies them into circular and triangular shapes using HOG features and linear support vector machines (SVMs). The second stage recognises the detected traffic signs into their subclasses using a convolutional neural network. The performance of the whole process is tested on the German traffic sign detection benchmark (GTSDB) and German traffic sign recognition benchmark (GTSRB) datasets. Experimental results show that the obtained detection and recognition rates are comparable with those reported in the literature, with much less complexity. Furthermore, the average processing time demonstrates its suitability for real-time applications.
- Author(s): Aditi Kohli ; Abhinav Gupta ; Divya Singhal
- Source: IET Image Processing, Volume 14, Issue 5, p. 947 –958
- DOI: 10.1049/iet-ipr.2019.0397
- Type: Article
The location of the smallest object in a scene plays an essential role in a viewer's perception. Any tampering with it may result in adverse consequences, especially in surveillance videos of banks, ATMs, traffic monitoring, etc. Therefore, a scientific approach is required to thoroughly observe the fine details of tampering (forgery) in a video. A spatio-temporal detection method using a convolutional neural network (CNN) is proposed to detect as well as localise the forged region in a forged video frame. The proposed method operates in two stages: the first detects forged frames using the proposed temporal CNN, while the second localises the forged region in a novel way using the proposed spatial CNN. A vital element of a video, the motion residual, is used to train the proposed network, making it effective at detecting object-based forgery in HD videos. The performance of the proposed method is evaluated on the SYSU-OBJFORG dataset (an object-based video forgery dataset) and a derived test dataset of variable-length and variable-frame-size videos. The results are compared with state-of-the-art methods to prove the efficacy of the proposed method.
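The abstract does not define its motion residual precisely, so here is one common, hedged formulation: the per-frame absolute deviation from the median of a short temporal window, which is near zero for static content and highlights moving or tampered regions:

```python
import numpy as np

def motion_residuals(frames):
    """Motion residual per frame: absolute difference from the median of
    a 3-frame temporal window (one plausible residual definition)."""
    frames = frames.astype(np.float64)
    med = np.stack([np.median(frames[max(0, i - 1):i + 2], axis=0)
                    for i in range(len(frames))])
    return np.abs(frames - med)
```

Feeding residuals rather than raw frames lets a CNN focus on temporal inconsistencies instead of scene content.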
- Author(s): Rahul Pramanik and Soumen Bag
- Source: IET Image Processing, Volume 14, Issue 5, p. 959 –972
- DOI: 10.1049/iet-ipr.2019.0208
- Type: Article
Offline recognition of handwritten text in Indian regional scripts is a major area of research, as nearly 910 million people in India use such scripts. Most reported research on Indian-script optical character recognition (OCR) systems has focused on a single script only; developing methodologies capable of handling more than one Indian script has yet to receive focused attention. This has motivated the authors to study and experiment on creating a recognition system that can handle the two most popular Indian scripts, namely Bangla and Devanagari. The authors propose a system that first detects and corrects skew present in Bangla and Devanagari handwritten words, estimates the headline, and further segments the words into meaningful pseudo-characters. This is followed by extraction of three different statistical features and combination of these features with off-the-shelf classifiers to identify the exemplary combination. Moreover, they employ state-of-the-art convolutional neural network-based transfer learning architectures and delineate a comparison with the extracted hand-crafted features. Finally, they amalgamate the identified pseudo-characters to provide the final result. On experimentation, the proposed segmentation methodology is found to provide good accuracy compared with existing methods.
- Author(s): Meng Jia
- Source: IET Image Processing, Volume 14, Issue 5, p. 973 –981
- DOI: 10.1049/iet-ipr.2019.0310
- Type: Article
In this study, the critical factor of the chaos system is analysed to improve the randomness of the encryption keyspace. Several chaotic systems are integrated with a linear function to form a much more efficient key-sequence generator. This study also presents a cross-colour-field confusion method that scrambles pixels among the R, G, and B colour matrices; in this way, the range of the pixel-scrambling map is extended to three matrices, compared with traditional schemes that scramble pixels only within their own colour matrix. The experimental results show that the improved cascade system has better bifurcation diagrams, and that the cross-colour-field diffusion algorithm leaves the encrypted image with little pixel correlation and no information leakage. The results justify that the novel cross-colour confusion scheme with improved chaotic systems resists brute-force, known-plaintext, chosen-plaintext, chosen-ciphertext, and differential attacks. The security analysis demonstrates that the proposed approach has satisfactory properties for image encryption.
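A single-map toy version of the diffusion stage can be sketched with the classic logistic map, x ← r·x·(1 − x), driving an XOR keystream; the paper's cascaded generator and cross-colour confusion are more elaborate, and a bare logistic map is not cryptographically strong, so this is illustration only:

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.99, burn=100):
    """Byte keystream from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_image(img, x0=0.5):
    """Diffuse a uint8 image by XOR with a chaotic keystream; applying
    the same operation twice restores the original image."""
    ks = logistic_keystream(img.size, x0=x0)
    return (img.reshape(-1) ^ ks).reshape(img.shape)
```

The initial value x0 acts as the key: decryption with even a slightly different x0 yields a different orbit and hence noise.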
- Author(s): Shubhobrata Bhattacharya ; Anirban Dasgupta ; Aurobinda Routray
- Source: IET Image Processing, Volume 14, Issue 5, p. 982 –994
- DOI: 10.1049/iet-ipr.2019.0199
- Type: Article
This paper presents new image descriptors for heterogeneous face recognition (HFR). The proposed descriptors combine directional and neighbourhood information using a rotating-spoke and concentric-rings concept, and are named multi-directional local adjacency descriptors (MDLAD). This family of descriptors captures directional information through successive rotations of a pair of orthogonal spokes, and adjacency information by comparing concentric rings around the central pixel of a window against that central pixel. MDLAD is found to describe face images well for recognition when matched using the chi-squared distance. Face recognition performance with MDLAD improves further when it is used as a layer in a deep neural network, yielding classification for HFR that is robust with respect to the state-of-the-art methods. The resulting MDLADNET deep network is easily trainable, with few hyperparameters and limited data samples compared to existing similar deep networks. The authors have experimented on different heterogeneous modalities, viz. Extended Yale B, CASIA, CUFSF, IIITD, LFW, Multi-PIE, and CARL, and have found proficient results.
Enhanced image no-reference quality assessment based on colour space distribution
Improved strategy for human action recognition; experiencing a cascaded design
Stereo matching for infrared images using guided filtering weighted by exponential moving average
Image patch prior learning based on random neighbourhood resampling for image denoising
Blind video quality assessment via spatiotemporal statistical analysis of adaptive cube size 3D-DCT coefficients
KSR-BOF: a new and exemplified method (as KSRs method) for image classification
Hybrid higher-order total variation model for multiplicative noise removal
Privacy preserving search index for image databases based on SURF and order preserving encryption
CapsNet topology to classify tumours from brain images and comparative evaluation
Kernel-based Bayesian clustering of computed tomography images for lung nodule segmentation
Robust image hashing with visual attention model and invariant moments
Depth data and fusion of feature descriptors for static gesture recognition
Active contour image segmentation model with de-hazing constraints
Structure tensor-based SIFT algorithm for SAR image registration
Two-stage traffic sign detection and recognition based on SVM and convolutional neural networks
CNN based localisation of forged region in object-based forgery for HD videos
Segmentation-based recognition system for handwritten Bangla and Devanagari words using conventional classification and transfer learning
Image encryption with cross colour field algorithm and improved cascade chaos systems
Multi-directional local adjacency descriptors (MDLAD) for heterogeneous face recognition
- Source: IET Image Processing, Volume 14, Issue 5, page: 995 –995
- DOI: 10.1049/iet-ipr.2020.0350
- Type: Article
Corrigendum: Automatic detection of acute lymphoblastic leukaemia based on extending the multifractal features