IET Image Processing
Volume 10, Issue 11, November 2016
- Author(s): Anil Singh Parihar and Om Prakash Verma
- Source: IET Image Processing, Volume 10, Issue 11, pp. 799–808
- DOI: 10.1049/iet-ipr.2016.0242
- Type: Article
This study presents a new contrast-enhancement approach called entropy-based dynamic sub-histogram equalisation. The proposed algorithm recursively divides the histogram based on the entropy of the sub-histograms: each sub-histogram is split into two sub-histograms of equal entropy. A stopping criterion is proposed to achieve an optimal number of sub-histograms. A new dynamic range is allocated to each sub-histogram based on its entropy and on the numbers of used and missing intensity levels it contains. The final contrast-enhanced image is obtained by equalising each sub-histogram independently. The proposed algorithm is compared with conventional as well as state-of-the-art contrast-enhancement algorithms, and the quantitative results for a large image data set are statistically analysed using a paired t-test. The quantitative and visual assessments show that the proposed algorithm outperforms most existing contrast-enhancement algorithms, producing natural-looking, well-contrasted images with almost no artefacts.
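The equal-entropy split at the heart of the recursive division can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the cumulative-entropy formulation, and the example histogram are all assumptions, and the stopping criterion and dynamic-range allocation are omitted.

```python
import numpy as np

def equal_entropy_cut(hist, lo, hi):
    """Find the bin that splits hist[lo:hi] into two parts contributing
    (approximately) equal Shannon entropy."""
    seg = hist[lo:hi].astype(float)
    p = seg / seg.sum()
    # Per-bin entropy contribution; empty bins contribute zero.
    contrib = np.where(p > 0, -p * np.log2(np.where(p > 0, p, 1.0)), 0.0)
    csum = np.cumsum(contrib)
    return lo + int(np.searchsorted(csum, csum[-1] / 2)) + 1

# A flat histogram splits at its midpoint.
uniform = np.ones(256)
cut = equal_entropy_cut(uniform, 0, 256)   # → 128
```

Applying the same function to each resulting half (until the stopping criterion fires) yields the recursive sub-histogram tree described in the abstract.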
- Author(s): Xiangguang Leng ; Kefeng Ji ; Xiangwei Xing ; Huanxin Zou ; Shilin Zhou
- Source: IET Image Processing, Volume 10, Issue 11, pp. 809–816
- DOI: 10.1049/iet-ipr.2015.0574
- Type: Article
Bilateral filtering smooths images while preserving edges by exploiting both the geometric closeness and the intensity similarity of neighbouring pixels. When the intensity similarity of neighbouring pixels is very high, however, bilateral filtering degenerates into Gaussian filtering: the performance does not improve significantly while the computation remains expensive. Many existing accelerated algorithms ignore this basic fact. In this study, a hybrid bilateral filtering algorithm based on edge detection is proposed. Using edge detection, the proposed algorithm combines bilateral filtering and Gaussian filtering, and the degree of mixing can be controlled by a threshold. Experimental results show that the proposed algorithm reduces computation efficiently while achieving better performance. Moreover, it shows potential to speed up existing accelerated bilateral filtering algorithms.
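The hybrid idea — full bilateral weighting only near edges, a plain Gaussian elsewhere — can be sketched as below. This is a slow reference sketch under assumed parameters (gradient-magnitude edge test, illustrative sigmas and threshold), not the paper's accelerated implementation.

```python
import numpy as np

def filter_patch(patch, sigma_s, sigma_r, bilateral):
    """Weighted average of a (2r+1)x(2r+1) patch; the range term is
    applied only in bilateral mode."""
    r = patch.shape[0] // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))        # spatial weight
    if bilateral:
        w = w * np.exp(-(patch - patch[r, r])**2 / (2 * sigma_r**2))
    return (w * patch).sum() / w.sum()

def hybrid_filter(img, r=2, sigma_s=1.5, sigma_r=0.1, thresh=0.2):
    """Bilateral filtering where the gradient magnitude exceeds the
    threshold; Gaussian filtering elsewhere."""
    pad = np.pad(img, r, mode='edge')
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > thresh
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2*r + 1, j:j + 2*r + 1]
            out[i, j] = filter_patch(patch, sigma_s, sigma_r, edges[i, j])
    return out

step = np.zeros((8, 8)); step[:, 4:] = 1.0    # a vertical step edge
smoothed = hybrid_filter(step)
```

On the step image the bilateral branch fires only along the edge column, so the discontinuity survives while flat regions get cheap Gaussian smoothing.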
- Author(s): Qingtang Su
- Source: IET Image Processing, Volume 10, Issue 11, pp. 817–829
- DOI: 10.1049/iet-ipr.2016.0048
- Type: Article
In this study, a novel blind image watermarking technique using Hessenberg decomposition is proposed to embed a colour watermark image into a colour host image. During embedding, the watermark information of the colour image is embedded into the (2, 2) and (3, 2) elements of the orthogonal matrix obtained by Hessenberg decomposition. During extraction, neither the original host image nor the original watermark image is needed, and it is impossible to retrieve them without the authorised keys. Experimental results show that the proposed colour image watermarking technique based on Hessenberg decomposition outperforms other watermarking methods and is robust against a wide range of attacks, e.g. image compression, filtering, cropping, rotation, added noise, blurring, scaling and sharpening. In particular, the proposed method has lower computational complexity than other methods based on singular value decomposition or QR decomposition.
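The decomposition the scheme relies on is readily available in SciPy; the sketch below only computes it for a hypothetical 4×4 host block and reads the two orthogonal-matrix entries the abstract names. The actual embedding rule (quantising the relationship between those two coefficients to encode a watermark bit) is the paper's and is omitted here.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
block = rng.random((4, 4))            # stand-in for a 4x4 host-image block

# Hessenberg decomposition: block = Q @ H @ Q.T, Q orthogonal,
# H upper Hessenberg.
H, Q = hessenberg(block, calc_q=True)

# The abstract embeds into the (2,2) and (3,2) elements of the
# orthogonal matrix, i.e. Q[1, 1] and Q[2, 1] in zero-based indexing.
c1, c2 = Q[1, 1], Q[2, 1]
recon = Q @ H @ Q.T
```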
- Author(s): Manel Dridi ; Mohamed Ali Hajjaji ; Belgacem Bouallegue ; Abdellatif Mtibaa
- Source: IET Image Processing, Volume 10, Issue 11, pp. 830–839
- DOI: 10.1049/iet-ipr.2015.0868
- Type: Article
This study presents a novel chaotic neural-network scheme for the encryption and decryption of medical images. The main objective of the proposed technique is to ensure the safety of medical images with a less complex algorithm than existing methods. To improve robustness, all pixels of the host image are first XORed with a generated key. Then, using a chaotic system (the logistic map), a binary sequence is generated to set the weights wij and biases bi of a neural network that encrypts the pixels produced in the previous step. Simulations and experiments were carried out on medical images coded on 8 and 12 bits/pixel. The results confirm the performance and efficiency of the proposed method, which is compliant with Digital Imaging and Communications in Medicine standards.
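The XOR-plus-logistic-map portion of the pipeline can be sketched as follows; the chaotic keystream here stands in for the paper's neural-network stage, and the seed, control parameter and key byte are illustrative assumptions. Because XOR is involutive, calling the same function with the same key decrypts.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, mu=3.99):
    """Byte keystream from the logistic map x -> mu * x * (1 - x)."""
    out = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        out[i] = int(x * 256) % 256     # quantise the chaotic state to a byte
    return out

def chaotic_xor(pixels, key_byte, x0=0.7):
    """XOR every pixel with a fixed key byte, then with the chaotic
    keystream; the same call with the same secrets inverts itself."""
    stream = logistic_keystream(pixels.size, x0).reshape(pixels.shape)
    return (pixels ^ key_byte) ^ stream

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = chaotic_xor(img, 0xA5)
dec = chaotic_xor(enc, 0xA5)            # recovers img exactly
```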
- Author(s): Liang Zhang ; Peiyi Shen ; Xilu Peng ; Guangming Zhu ; Juan Song ; Wei Wei ; Houbing Song
- Source: IET Image Processing, Volume 10, Issue 11, pp. 840–847
- DOI: 10.1049/iet-ipr.2015.0844
- Type: Article
Images obtained under low-light conditions tend to have low grey levels, high noise levels, and indistinguishable details. This degradation not only affects the recognition of images but also influences the performance of computer vision systems. Low-light image enhancement based on the dark channel prior de-hazing technique can enhance the contrast of images effectively and highlight their details. However, the dark channel prior de-hazing technique ignores the effects of noise, which leads to significant noise amplification after enhancement. In this study, a de-hazing-based simultaneous enhancement and noise reduction algorithm is proposed by analysing the essence of the dark channel prior de-hazing technique and the bilateral filter. First, the authors estimate the initial parameters of the hazy image model using the de-hazing technique. Then, they correct the parameters of the hazy image model alternately with an iterative joint bilateral filter. Experimental results indicate that the proposed algorithm can simultaneously enhance low-light images and reduce noise effectively, and that it compares well with common image enhancement and noise reduction algorithms in terms of both subjective visual effects and objective quality assessments.
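The underlying "de-haze the inverted image" trick can be sketched as below: invert the low-light image, estimate a transmission map from its dark channel, invert the scattering model I = J·t + A·(1 − t), and flip back. The parameter values and function names are illustrative assumptions, and the paper's joint-bilateral denoising loop is omitted.

```python
import numpy as np

def dark_channel(img, r=1):
    """Minimum over colour channels and a (2r+1)x(2r+1) neighbourhood."""
    m = img.min(axis=2)
    pad = np.pad(m, r, mode='edge')
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = pad[i:i + 2*r + 1, j:j + 2*r + 1].min()
    return out

def enhance_lowlight(img, omega=0.8, A=1.0, t0=0.1):
    """Invert the low-light image, de-haze it with I = J*t + A*(1 - t),
    and invert the result back."""
    inv = 1.0 - img
    t = np.clip(1.0 - omega * dark_channel(inv), t0, 1.0)[..., None]
    J = (inv - A * (1.0 - t)) / t
    return np.clip(1.0 - J, 0.0, 1.0)

dim = np.full((4, 4, 3), 0.1)           # a uniformly dark RGB image
bright = enhance_lowlight(dim)          # brighter than the input
```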
- Author(s): Yan Na ; Mengmeng Liao ; Cheolkon Jung
- Source: IET Image Processing, Volume 10, Issue 11, pp. 848–864
- DOI: 10.1049/iet-ipr.2015.0528
- Type: Article
The available speeded-up robust features (SURF) image geometrical registration algorithm tends to suffer from a one-to-many association problem in feature association: one feature point in an image is associated with multiple feature points in another image, some or even all of which are mismatches. Because the coordinates of these mismatched points are used to compute the transformation matrix, it is difficult to obtain a desirable registration. To solve this problem, a super-SURF image geometrical registration algorithm is proposed, in which information-rich areas are selected for feature-point detection and association. The degree of closeness of the multiple feature points in each one-to-many pair is analysed to remove the pairs with larger errors and retain those with smaller errors. The transformation matrix is then determined from the coordinates of the retained feature-point pairs, and the registered image is obtained by transforming the floating image. Experimental results indicate that the super-SURF algorithm achieves higher matching accuracy with less running time than the SURF algorithm.
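A simplified version of the one-to-many pruning can be sketched as below: when several source features associate with the same target feature, keep only the association with the smallest descriptor distance. This nearest-distance rule is an assumption standing in for the paper's closeness analysis.

```python
def prune_one_to_many(matches):
    """For each target feature keep only the closest association
    (smallest descriptor distance); drop the rest.
    `matches` is a list of (src_idx, dst_idx, distance) tuples."""
    best = {}
    for src, dst, dist in matches:
        if dst not in best or dist < best[dst][2]:
            best[dst] = (src, dst, dist)
    return sorted(best.values())

# Target feature 5 is claimed by two source features; only the closer
# partner survives.
matches = [(0, 5, 0.9), (1, 5, 0.2), (2, 7, 0.4)]
kept = prune_one_to_many(matches)       # → [(1, 5, 0.2), (2, 7, 0.4)]
```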
- Author(s): Yunjie Chen ; Jian Li ; Hui Zhang ; Yuhui Zheng ; Byeungwoo Jeon ; Qingming Jonathan Wu
- Source: IET Image Processing, Volume 10, Issue 11, pp. 865–876
- DOI: 10.1049/iet-ipr.2016.0271
- Type: Article
Owing to noise and intensity inhomogeneity in brain magnetic resonance (MR) images, existing segmentation algorithms struggle to produce satisfactory results. In this study, the authors propose an improved fuzzy C-means (FCM) clustering method to obtain more accurate results. First, they modify the traditional regularisation smoothing term using non-local information to reduce the effect of noise. Second, inspired by the Gaussian mixture model, the distance function of FCM is defined as an exponential function of not only the distance but also the covariance and the prior probability, improving robustness. Meanwhile, the bias field is modelled using orthogonal basis functions to reduce the effect of intensity inhomogeneity. Finally, a hierarchical strategy is used to construct a more flexible objective function, which treats the improved distance function itself as a sub-FCM, making the method more robust and accurate. Experimental results on synthetic and real MR images demonstrate the method's accuracy and robustness compared with state-of-the-art methods.
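For reference, the baseline FCM the paper improves on alternates two closed-form updates: centroids as membership-weighted means, and memberships from inverse distances. A minimal sketch (no non-local spatial term, no bias-field model — those are the paper's additions):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Classic fuzzy C-means on an (n, d) data matrix: alternate the
    centroid update and the membership update."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m
        V = W.T @ X / W.sum(axis=0)[:, None]    # weighted-mean centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated 1-D clusters.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
U, V = fcm(X)
labels = U.argmax(axis=1)
```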
- Author(s): Xin Liu ; He Zhang ; Yuan Yan Tang ; Ji-Xiang Du
- Source: IET Image Processing, Volume 10, Issue 11, pp. 877–884
- DOI: 10.1049/iet-ipr.2016.0138
- Type: Article
Traditional dark-channel-prior-based haze removal schemes often suffer from colour distortion and generate halo artefacts in remote scenes. To tackle these issues, the authors present an efficient scene-adaptive single-image dehazing approach via an opening dark channel model (ODCM). First, the authors detect the image depth information and separate the scene into close and distant views. Then, an ODCM is proposed to optimise the whole atmospheric veil, in which the values of the close view are regularised by a minimum-channel image while the distant parts are estimated by an appropriate lower constant. The transmission map is then further optimised by a guided filter and smoothed by a domain transform filter. Finally, the haze-degraded image is restored by the atmospheric scattering model. Extensive experiments show that the proposed dehazing approach significantly increases the perceptual visibility of the scene and achieves better colour fidelity.
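The depth-split veil estimate can be sketched as below: close-view pixels follow the minimum-channel image while distant pixels are capped by a small constant, and the scattering model I = J·t + A·(1 − t) is then inverted. All threshold values and function names here are illustrative assumptions; the guided-filter and domain-transform refinement stages are omitted.

```python
import numpy as np

def atmospheric_veil(img, depth, near=0.5, far_cap=0.2, omega=0.95):
    """Scene-adaptive veil: min-channel for close views, a small
    constant cap for distant ones."""
    veil = omega * img.min(axis=2)
    return np.where(depth < near, veil, np.minimum(veil, far_cap))

def dehaze(img, depth, A=1.0, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.clip(1.0 - atmospheric_veil(img, depth) / A, t0, 1.0)[..., None]
    return np.clip((img - A * (1.0 - t)) / t, 0.0, 1.0)

rng = np.random.default_rng(0)
hazy = rng.random((6, 6, 3)) * 0.5 + 0.4     # a light-grey 'hazy' image
depth = rng.random((6, 6))                   # assumed per-pixel depth map
clear = dehaze(hazy, depth)
```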
- Author(s): Yingfeng Cai ; Hai Wang ; Xiaobo Chen ; Long Chen
- Source: IET Image Processing, Volume 10, Issue 11, pp. 885–892
- DOI: 10.1049/iet-ipr.2016.0176
- Type: Article
This study proposes an efficient method to handle the object occlusions seen in monocular traffic image sequences. The motivation is that different methods perform differently in occlusion segmentation; the authors' idea is to use a situation-driven approach to aggregate different methods in order to obtain good performance. This study classifies occlusion into four categories according to the foreground situation, and a multilevel occlusion-handling framework is utilised. First, an image segmentation algorithm based on convex-hull analysis is utilised for intra-frame-level occlusion segmentation; the algorithm is built on the compactness ratio and interior distance ratio of the foreground. Second, an online sample-based classification algorithm is utilised for tracking-level occlusion segmentation. Training samples are extracted from the historical frames before occlusion, and testing samples are extracted from the current frame by an adaptive searching strategy, so that occlusion segmentation is converted into online classification of the testing samples. This algorithm is built on the similarity and coherence of the target's properties between consecutive frames. Experiments on video sequences illustrate the good performance of the proposed method under different conditions with low computational cost.
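The convex-hull compactness cue can be sketched as below: the ratio of a foreground blob's area to its convex-hull area drops well below that of a solid object when two objects occlude diagonally. This definition is an assumption consistent with the abstract; the interior distance ratio and the classifier stage are omitted.

```python
import numpy as np
from scipy.spatial import ConvexHull

def compactness_ratio(mask):
    """Foreground pixel count divided by convex-hull area; a value well
    below that of a solid blob hints at two occluding objects."""
    ys, xs = np.nonzero(mask)
    hull = ConvexHull(np.column_stack([xs, ys]).astype(float))
    return mask.sum() / hull.volume      # ConvexHull.volume is area in 2-D

solid = np.ones((10, 10), dtype=bool)          # one compact object
dumbbell = np.zeros((20, 20), dtype=bool)      # two diagonal blobs
dumbbell[:5, :5] = True
dumbbell[15:, 15:] = True
```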
- Author(s): Hu He
- Source: IET Image Processing, Volume 10, Issue 11, pp. 893–899
- DOI: 10.1049/iet-ipr.2016.0031
- Type: Article
This study demonstrates an unsupervised segmentation algorithm for video sequences acquired from a moving camera, with results comparable to semi-supervised (interactive) methods. The authors employ depth cues from multiple-view stereo to strengthen the hypothesis of a potential object based on saliency scores. The resulting object and background hypotheses are then used to model foreground and background distributions for a graph-cut-based segmentation. The graph-cut framework simultaneously optimises over depth and colour information to produce automatically segmented objects in challenging unstructured scenes. The authors refer to this saliency- and depth-based segmentation method as 'SDCut'. The proposed method is fully automatic, requiring no intervention. Experiments demonstrate that it achieves accurate segmentation results comparable with several well-known interactive semi-supervised segmentation methods.
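The hypothesis-forming step can be sketched as a simple fusion of normalised saliency with inverse normalised depth (near objects score high), followed by a threshold. The weighting and cut-off are illustrative assumptions, and the graph-cut refinement over colour and depth is omitted.

```python
import numpy as np

def object_hypothesis(saliency, depth, alpha=0.5, cut=0.5):
    """Fuse normalised saliency with inverse normalised depth and
    threshold the combined score into a foreground-hypothesis mask."""
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
    d = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-12)  # near = high
    return alpha * s + (1.0 - alpha) * d > cut

# A salient, near pixel at (0, 0); far, non-salient background elsewhere.
sal = np.array([[1.0, 0.0], [0.0, 0.0]])
dep = np.array([[0.0, 1.0], [1.0, 1.0]])
mask = object_hypothesis(sal, dep)
```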
- Author(s): Xue Fan ; Irfan Riaz ; Yawar Rehman ; Hyunchul Shin
- Source: IET Image Processing, Volume 10, Issue 11, pp. 900–907
- DOI: 10.1049/iet-ipr.2016.0068
- Type: Article
Variations in road types and their ambient environments make single-image vanishing-point detection a challenging task. In this study, a novel and efficient vanishing-point detection method is proposed using a random forest and patch-wise weighted soft voting. To eliminate the noise votes introduced by the background region and to reduce the workload of the voting stage, a random-forest-based valid-patch extraction technique is developed, which distinguishes informative road patches from background noise. To prepare training data for the random forest, a training-patch generation method is proposed, and a variety of road-relevant features are introduced for training-patch representation. Since the traditional pixel-wise voting scheme is time consuming and imprecise, a patch-wise weighted soft-voting scheme is proposed to generate a more precise voting map and to further reduce the computational complexity of the voting stage. Experimental results on the benchmark dataset show that the proposed method represents a step forward in performance: it is about 6 times faster in detection speed and 5.6% better in detection accuracy than the generalised Laplacian of Gaussian filter based method, a well-known state-of-the-art approach.
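Weighted soft voting can be sketched as below: each patch contributes a weighted Gaussian footprint to an accumulator instead of a single pixel, and the map's peak is the vanishing-point estimate. The Gaussian footprint and its width are illustrative assumptions; the random-forest weighting is omitted.

```python
import numpy as np

def soft_vote(shape, votes, sigma=2.0):
    """Accumulate weighted Gaussian 'soft' footprints for each
    (row, col, weight) vote and return the voting map."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    acc = np.zeros(shape)
    for r, c, w in votes:
        acc += w * np.exp(-((yy - r)**2 + (xx - c)**2) / (2 * sigma**2))
    return acc

# Two strong, mutually reinforcing votes near (10, 10) beat a stray vote.
votes = [(10, 10, 1.0), (10, 11, 0.8), (30, 5, 0.1)]
acc = soft_vote((40, 40), votes)
vp = np.unravel_index(acc.argmax(), acc.shape)   # → (10, 10)
```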
- Author(s): Huasong Chen ; Qinghua Wang ; Chunyong Wang ; Zhenhua Li
- Source: IET Image Processing, Volume 10, Issue 11, pp. 908–925
- DOI: 10.1049/iet-ipr.2015.0734
- Type: Article
Conventional blind restoration methods often treat an image as a whole. However, an image may contain several types of components, each with its own morphology and properties; a single model can capture one component effectively but fails to represent the others, so conventional methods lose important features. In this study, a new sparse-prior-based blind image deconvolution model is proposed that employs the widely used image decomposition strategy separating an image into cartoon (the piecewise-smooth part) and texture (the oscillating part). On the basis of the different properties of cartoon and texture, it regularises the texture with sparsity in the discrete cosine transform domain, and the cartoon with a combined term comprising a framelet-domain sparse prior and a quadratic regularisation. A double alternating split Bregman iteration is then proposed to solve the resulting minimisation problem. It is demonstrated that the proposed algorithm recovers images with higher quality and more abundant features than other popular deblurring methods.
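The texture-side regularisation can be sketched as one sparse-coding step: soft-threshold the layer's DCT coefficients. The threshold value and function name are assumptions; the framelet prior for the cartoon layer, the blur-kernel estimation, and the split Bregman outer loop are all omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft(x, t):
    """Element-wise soft-thresholding (the l1 proximal operator)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def texture_shrink(layer, tau=0.05):
    """Promote DCT-domain sparsity of the texture layer: transform,
    soft-threshold, transform back."""
    return idctn(soft(dctn(layer, norm='ortho'), tau), norm='ortho')

rng = np.random.default_rng(0)
layer = rng.standard_normal((16, 16))
shrunk = texture_shrink(layer)
```

With the orthonormal DCT, shrinking coefficients can only reduce the layer's energy, which is what makes this step a valid proximal update inside the outer iteration.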
- Author(s): Zhengwei Shen and Lishuang Cheng
- Source: IET Image Processing, Volume 10, Issue 11, pp. 926–935
- DOI: 10.1049/iet-ipr.2015.0787
- Type: Article
In this study, the authors propose a coupled analysis-based image restoration model regularised by total variation (TV) and wavelet frame coefficient penalty terms imposed through the non-convex non-smooth ℓp-norm (0 < p < 1). The main contributions of this model are: (i) TV's intrinsic ability to preserve piecewise-smooth areas and the wavelet frame's strong capability of sparsely representing the underlying image interact alternately, leading to better experimental results; and (ii) the non-convex non-smooth ℓp-norm (0 < p < 1) regularisation matches the marginal distributions of gradients in natural images better than the ℓ1-norm, suppressing staircase effects more effectively. Using the alternating direction method of multipliers, the objective function is first divided into three subproblems, which are solved by the fast iterative shrinkage-thresholding algorithm (FISTA) and the generalised iterated shrinkage algorithm (GISA), respectively. The GISA solution is computationally more efficient than alternatives such as iteratively reweighted ℓ1 (IRL1) and iteratively reweighted least squares (IRLS), which are restricted to non-convex non-decreasing functions; the FISTA solution also converges faster than the plain iterative shrinkage-thresholding algorithm. Extensive experimental results show that the proposed model exhibits strong image restoration capability.
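For reference, FISTA itself can be sketched for the convex ℓ1 case, min 0.5‖Ax − b‖² + λ‖x‖₁; the paper's GISA subproblem replaces the soft-threshold with a generalised shrinkage for the non-convex ℓp penalty, which is not reproduced here.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam=0.1, iters=100):
    """FISTA: a gradient step on the data term, a shrinkage step on the
    penalty, and Nesterov momentum on the iterates."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Sanity check: with A = I the minimiser is exactly soft(b, lam).
x = fista(np.eye(3), np.array([1.0, 0.05, -2.0]))   # → [0.9, 0.0, -1.9]
```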
Contrast enhancement using entropy-based dynamic sub-histogram equalisation
Hybrid bilateral filtering algorithm based on edge detection
Novel blind colour image watermarking technique using Hessenberg decomposition
Cryptography of medical images based on a combination between chaotic and neural network
Simultaneous enhancement and noise reduction of a single low-light image
Super-speed up robust features image geometrical registration algorithm
Non-local-based spatially constrained hierarchical fuzzy C-means method for brain magnetic resonance imaging segmentation
Scene-adaptive single image dehazing via opening dark channel model
Multilevel framework to handle object occlusions for real-time tracking
Saliency and depth-based unsupervised object segmentation
Vanishing point detection using random forest and patch-wise weighted soft voting
Image decomposition-based blind image deconvolution model by employing sparse representation
Coupled image restoration model with non-convex non-smooth ℓp wavelet frame and total variation regularisation