IET Image Processing
Volume 8, Issue 12, December 2014
Lossless chaos-based crypto-compression scheme for image protection
- Author(s): Atef Masmoudi and William Puech
- Source: IET Image Processing, Volume 8, Issue 12, p. 671 –686
- DOI: 10.1049/iet-ipr.2013.0598
- Type: Article
In this study, the authors propose a new scheme that performs both lossless compression and encryption of images. Lossless compression is done by arithmetic coding (AC), while encryption is based on a chaos-based pseudorandom bit generator. They propose to incorporate recent results of chaos theory into AC, chaotically shuffling the cumulative frequency vector of the input symbols so that AC becomes secure and the decoding process completely key-dependent. Many other techniques based on varying the statistical model used by AC have been proposed in the literature; however, these techniques suffer from losses in compression efficiency caused by changes in the entropy model statistics and are weak against known attacks. The proposed compression–encryption techniques are developed and discussed. Numerical simulation analysis indicates that the proposed scheme is highly satisfactory for image encryption without any loss of AC compression efficiency. In addition, it can be incorporated into any image compression standard or algorithm that employs AC as the entropy coding stage, including static, adaptive and context-based adaptive models, and at any level, including the bit, pixel and predictive-error pixel levels.
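The central idea, a key-dependent shuffle of the arithmetic coder's cumulative frequency vector, can be sketched as follows. This is a minimal illustration rather than the authors' exact construction: the logistic map as pseudorandom generator and the sort-based permutation are assumptions.

```python
def logistic_trajectory(x0, n, r=3.99):
    # Iterate the logistic map x -> r*x*(1-x); for r near 4 the orbit is
    # chaotic and extremely sensitive to the key x0.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_permutation(key, n):
    # Key-dependent permutation of symbol indices: rank positions by the
    # chaotic trajectory values.
    traj = logistic_trajectory(key, n)
    return sorted(range(n), key=lambda i: traj[i])

def shuffled_cumulative_freqs(freqs, key):
    # Shuffle the symbol frequency table with the chaotic permutation, then
    # rebuild the cumulative frequency vector the arithmetic coder would use.
    perm = chaotic_permutation(key, len(freqs))
    cum, total = [], 0
    for i in perm:
        cum.append(total)
        total += freqs[i]
    return perm, cum, total

freqs = [5, 1, 3, 7]                         # toy symbol counts
perm, cum, total = shuffled_cumulative_freqs(freqs, key=0.3141)
```

Because the symbol frequencies are only permuted, not altered, the coder's compression efficiency is unchanged, which matches the abstract's claim; a decoder without the key cannot reconstruct the symbol ordering.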
Document image super-resolution using structural similarity and Markov random field
- Author(s): Xiaoxuan Chen and Chun Qi
- Source: IET Image Processing, Volume 8, Issue 12, p. 687 –698
- DOI: 10.1049/iet-ipr.2013.0412
- Type: Article
Low-resolution (LR) document images may cause reading difficulties or low recognition rates in computer vision, so it is necessary to improve the resolution of an LR document image algorithmically. In this study, a novel document image super-resolution (SR) method using structural similarity and a Markov random field (MRF) is proposed. First, a non-local algorithm is used to find similar patches. Instead of the Euclidean distance, a modified chi-square distance is proposed to measure patch similarity, because it better describes the bimodal characteristic of document images. Finally, the structural similarity of similar patches serves as a constraint for the MRF-based SR method, which is well suited to describing the neighbouring relationship between patches. SR reconstruction for LR images of printed and handwritten documents is carried out by the proposed algorithm. Experimental results show that the reconstructed SR images achieve higher peak signal-to-noise ratio and structural similarity values than several state-of-the-art SR methods, and that visually pleasing SR images are produced as well.
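The chi-square patch distance at the heart of the similarity search can be illustrated with a plain chi-square distance; the paper's exact modification is not reproduced here, and the toy patch values are invented for the sketch.

```python
def chi_square_distance(p, q, eps=1e-12):
    # Chi-square distance between two flattened patches. Unlike the Euclidean
    # distance, each term is normalised by the total intensity, which suits
    # the bimodal ink-vs-paper statistics of document images.
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

ink   = [0.1, 0.1, 0.9, 0.9]   # toy 2x2 patches, flattened
ink2  = [0.1, 0.2, 0.9, 0.8]   # a similar stroke
paper = [0.9, 0.9, 0.1, 0.1]   # a background-dominated patch

d_within  = chi_square_distance(ink, ink2)
d_between = chi_square_distance(ink, paper)
```

Patches from the same mode (ink on ink) score a much smaller distance than patches from opposite modes, which is what the non-local search needs.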
Multimodal non-rigid registration methods based on local variability measures in computed tomography and magnetic resonance brain images
- Author(s): Isnardo Reducindo ; Aldo R. Mejia-Rodriguez ; Edgar R. Arce-Santana ; Daniel U. Campos-Delgado ; Flavio Vigueras-Gomez ; Elisa Scalco ; Anna M. Bianchi ; Giovanni M. Cattaneo ; Giovanna Rizzo
- Source: IET Image Processing, Volume 8, Issue 12, p. 699 –707
- DOI: 10.1049/iet-ipr.2013.0705
- Type: Article
This paper presents a novel non-rigid multimodal registration method that relies on three basic steps: first, an initial approximation of the deformation field is obtained by a parametric registration technique based on particle filtering; second, an intensity mapping based on local variability measures (LVM) is applied over the two images in order to overcome the multimodal restriction between them; and third, an optical flow method is used in an iterative way to find the remaining displacements of the deformation field. Hence the new methodology offers a solution for multimodal non-rigid registration (NRR) by a quadratic optimisation over a convex surface, which allows independent motion of each pixel, in contrast to methods that parameterise the deformation space. To evaluate the proposed method, a set of magnetic resonance/computed tomography clinical studies (pre- and post-radiotherapy treatment) of three patients with cerebral tumour deformations of the brain structures was employed. The resulting registration was evaluated both qualitatively and quantitatively by standard indices of correspondence over anatomical structures of interest in radiotherapy (brain, tumour and cerebral ventricles). These results showed that one of the proposed LVM (entropy) offers superior performance in estimating the non-rigid deformation field.
Intelligent hybrid watermarking ancient-document wavelet packet decomposition-singular value decomposition-based schema
- Author(s): Mohamed Neji Maatouk and Najoua Essoukri Ben Amara
- Source: IET Image Processing, Volume 8, Issue 12, p. 708 –717
- DOI: 10.1049/iet-ipr.2013.0546
- Type: Article
Ancient documents are among the pillars of our history, preserving historical events for future generations. Many digitisation projects have been launched to preserve them and make them available to the public. However, ensuring their security is a challenge, since digital documents are susceptible to being hacked. Watermarking the document images therefore seems a promising solution, mainly to protect copyright. Many watermarking algorithms have been proposed, particularly in the medical field; however, no solution has been recommended for images of ancient documents. In this study, the authors present a watermarking approach devoted to such images. Their algorithm is based on wavelet packet decomposition, with an intelligent choice of the best base and carrier points by way of singular value decomposition. In their approach, the insertion base and the carrier points of the signature are dynamic, varying from one image to another. For better preservation of the watermark, a convolutional encoder is used to encode the data. Experiments on a set of images of ancient documents taken from the National Library of Tunisia and the National Archives of Tunis have shown promising results.
Inversion attack resilient zero-watermarking scheme for medical image authentication
- Author(s): Seenivasagam Vellaisamy and Velumani Ramesh
- Source: IET Image Processing, Volume 8, Issue 12, p. 718 –727
- DOI: 10.1049/iet-ipr.2013.0558
- Type: Article
Medical images are watermarked with patient data to enforce patient authentication and identification in radiology practices. In addition to common threats such as signal processing and geometric attacks, medical image watermarking systems are susceptible to a new class of threats called ‘inversion attack’, leading to ambiguities in establishing rightful ownership. This study presents an ‘inversion attack’ resilient zero-watermarking system, in the hybrid Contourlet transform – singular value decomposition domain for medical image authentication. This scheme preserves the fidelity of the host image without introducing any artefacts and employs triangular number generating function and Hu's image invariants to confront ‘inversion attacks’. The performance of the system is evaluated with medical images of different modalities and a quick response code watermark that contains patient data. The experimental results demonstrate the robustness of the system against ‘ambiguity attacks’ and signify its appropriateness for secured medical image exchange between remote radiologists.
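Zero-watermarking leaves the host image untouched: a feature extracted from the image is combined with the watermark to form a master share held by a trusted party. The sketch below substitutes a simple mean-comparison feature for the paper's Contourlet–SVD features and Hu invariants, and uses a triangular-number index map as one plausible reading of the abstract's "triangular number generating function"; all of these simplifications are assumptions.

```python
def triangular(n):
    # Triangular number T(n) = n(n+1)/2, used as an index map when mixing
    # the feature bits (a hypothetical stand-in for the paper's function).
    return n * (n + 1) // 2

def feature_bits(image):
    # Host-image feature: each pixel compared with the global mean. The host
    # image itself is never modified -- the essence of zero-watermarking.
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def mixed_feature(image):
    bits = feature_bits(image)
    n = len(bits)
    return [bits[triangular(i) % n] for i in range(n)]

def make_share(image, watermark):
    # Master share = feature XOR watermark; registered with a trusted party.
    return [a ^ b for a, b in zip(mixed_feature(image), watermark)]

def recover(image, share):
    # Verification recomputes the feature and XORs the share back.
    return [a ^ b for a, b in zip(mixed_feature(image), share)]

image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
watermark = [1, 0] * 8
share = make_share(image, watermark)
recovered = recover(image, share)
```

Because the share alone reveals nothing without the image feature, an attacker cannot forge an earlier "embedding", which is how such schemes resist inversion-style ambiguity attacks.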
Sparse-induced similarity measure: mono-modal image registration via sparse-induced similarity measure
- Author(s): Aboozar Ghaffari and Emad Fatemizadeh
- Source: IET Image Processing, Volume 8, Issue 12, p. 728 –741
- DOI: 10.1049/iet-ipr.2013.0575
- Type: Article
The similarity measure is a key element of image registration. Most traditional intensity-based similarity measures (e.g. sum of squared differences, correlation coefficient, mutual information and correlation ratio) assume a stationary image and pixel-by-pixel independence. These measures ignore the correlation among pixel intensities; hence, perfect registration cannot be achieved, especially in the presence of spatially varying intensity distortions and outlier objects that appear in one image but not in the other. It is assumed here that non-stationary intensity distortion (such as a bias field) has a sparse representation in the transform domain. Based on this assumption, a novel similarity measure built on sparse representation is proposed for the mono-modal setting: the zero norm (ℓ0) in the transform domain is introduced as a new similarity measure in the presence of non-stationary intensity distortion. The study defines a sparsity similarity measure that indicates the complexity of the residual image between two registered images in a transform domain such as the discrete cosine transform or wavelets. It is shown analytically and statistically that the proposed measure has important properties, such as metric properties in vector space, from a correntropy perspective. The measure produces accurate registration results on both the artificial and real-world problems examined in the study.
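The ℓ0-in-the-transform-domain idea can be demonstrated on a one-dimensional toy "image": when two signals differ only by a smooth bias field, the residual is sparse in the DCT domain, whereas a misregistration makes it dense. This is a self-contained sketch, not the paper's optimisation pipeline.

```python
import math

def dct2_1d(x):
    # Unnormalised 1-D DCT-II: the transform domain in which the residual's
    # sparsity is measured (the paper also allows wavelets).
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def l0_measure(img_a, img_b, tol=1e-6):
    # Approximate l0 "norm": the number of non-negligible DCT coefficients of
    # the residual. A small value means the residual is simple -- the images
    # agree up to a smooth, sparsely representable intensity bias.
    residual = [a - b for a, b in zip(img_a, img_b)]
    return sum(1 for c in dct2_1d(residual) if abs(c) > tol)

N = 32
base = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]          # 1-D "image"
bias = [0.2 * math.cos(math.pi * (n + 0.5) / N) for n in range(N)]    # smooth bias field
aligned    = [b + d for b, d in zip(base, bias)]
misaligned = base[5:] + base[:5]                                      # toy misregistration

cost_aligned    = l0_measure(base, aligned)
cost_misaligned = l0_measure(base, misaligned)
```

The bias field is exactly the k = 1 DCT basis function here, so the aligned residual has a single active coefficient; the shifted signal produces many more, so minimising this cost favours correct alignment even under intensity distortion.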
A fractal-based image encryption system
- Author(s): Salwa Kamal Abd-El-Hafiz ; Ahmed G. Radwan ; Sherif H. Abdel Haleem ; Mohamed L. Barakat
- Source: IET Image Processing, Volume 8, Issue 12, p. 742 –752
- DOI: 10.1049/iet-ipr.2013.0570
- Type: Article
This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
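A minimal sketch of the single-fractal idea: the secret key selects a viewport into a fractal, the escape-time values of that viewport form a keystream, and the image bytes are XORed with it. The Mandelbrot set, the specific key values and the plain XOR diffusion are illustrative assumptions; the paper's full system adds feedback delay, multiplexing and shifts.

```python
def escape_byte(cx, cy, max_iter=256):
    # Escape time of one point under z -> z^2 + c, reduced to a byte.
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i % 256
    return 255

def fractal_pad(n, x0, y0, step):
    # The key is the viewport (x0, y0, step) into the fractal: tiny changes
    # in it land on different points of the complex detail and change the pad.
    return [escape_byte(x0 + step * k, y0) for k in range(n)]

def xor_cipher(data, key):
    # XOR with the fractal-derived pad; applying it twice decrypts.
    pad = fractal_pad(len(data), *key)
    return bytes(d ^ p for d, p in zip(data, pad))

key = (-0.743, 0.131, 1e-4)    # toy key near the set's intricate boundary
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)
```

Real-valued key parameters give the scheme its large key space: each extra independent parameter (delay, shift, choice of fractal) multiplies the number of distinct keystreams.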
Edge preservation image enlargement and enhancement method based on the adaptive Perona–Malik non-linear diffusion model
- Author(s): Baraka Maiseli ; Ogada Elisha ; Jiangyuan Mei ; Huijun Gao
- Source: IET Image Processing, Volume 8, Issue 12, p. 753 –760
- DOI: 10.1049/iet-ipr.2014.0040
- Type: Article
In this study, the authors have proposed a new super-resolution (SR) model based on the Perona–Malik regularisation scheme. The new model integrates into its regularisation component an adaptive exponential term which automatically adjusts itself depending on the local image features. This lends more sensitivity and adaptability to the proposed model, thereby making the reconstruction process much less punishing against semantically important features. Therefore, regularisation is stronger in homogeneous regions, and weaker in the neighbourhood of boundaries. The proposed method has a promising capability of suppressing noise more effectively, while preserving important image features. The approach used differs significantly from the available methods, especially in the manner in which adaptability has been deployed. Noting that SR methods are less sensitive to the local image topography, a factor that causes the super-resolved images to be visually poor, the new method sensitively probes the local features of the image, and determines the necessary level of reconstruction and regularisation. Additionally, the formulation robustly introduces a backward diffusion, a phenomenon proved in the literature to have a tendency of sharpening edges. The authors have included empirical reconstruction results to demonstrate that their model produces better images in comparison with other classical methods.
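The regularisation behaviour described above, strong smoothing in homogeneous regions and weak smoothing across edges, is the classic Perona–Malik mechanism, which a one-dimensional sketch makes concrete. The rational diffusivity used here is the textbook choice, not the paper's adaptive exponential term, and the signal is invented for illustration.

```python
def perona_malik_1d(u, k=0.5, dt=0.2, steps=20):
    # 1-D Perona-Malik diffusion: the diffusivity c = 1/(1+(g/k)^2) decays
    # with the local gradient g, so smoothing is strong inside flat regions
    # and weak across edges (edge preservation). The paper's model replaces
    # this fixed law with an adaptive exponential term.
    u = list(u)
    for _ in range(steps):
        new = list(u)
        for i in range(1, len(u) - 1):
            gr = u[i + 1] - u[i]                   # right-hand gradient
            gl = u[i] - u[i - 1]                   # left-hand gradient
            cr = 1.0 / (1.0 + (gr / k) ** 2)
            cl = 1.0 / (1.0 + (gl / k) ** 2)
            new[i] = u[i] + dt * (cr * gr - cl * gl)
        u = new
    return u

# A noisy step edge: low plateau, jump, high plateau.
noisy_step = [0.0, 0.1, -0.1, 0.05, 1.0, 0.95, 1.1, 1.0]
smoothed = perona_malik_1d(noisy_step)
```

After diffusion the within-plateau noise is flattened while the jump between the plateaus survives, which is exactly the trade-off the abstract describes.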
Human perception-based image segmentation using optimising of colour quantisation
- Author(s): Sung In Cho ; Suk-Ju Kang ; Young Hwan Kim
- Source: IET Image Processing, Volume 8, Issue 12, p. 761 –770
- DOI: 10.1049/iet-ipr.2013.0602
- Type: Article
This study presents an advanced histogram-based image segmentation method that enhances image segmentation quality, while greatly reducing the computational complexity. Unlike existing histogram-based methods, the authors optimise the size of bins in the colour histogram by using human perception-based colour quantisation and the clustering centroids are selected effectively without using a complex process. Additionally, an over-segmentation removal technique based on connected-component labelling is employed. This improves the segmentation quality by connectivity analysis. A comparison between the experimental results on the Berkeley Segmentation Dataset by the proposed method and the benchmark methods demonstrated that the proposed method enhanced the segmentation quality by improving the Probabilistic Rand Index and the Segmentation Covering values compared with those of the benchmark methods. The computation time using the proposed method is reduced by up to 91.63% compared with the computation time using benchmark methods.
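The histogram-with-coarse-bins idea can be sketched on greyscale values: quantise intensities into bins, keep the dominant bins as cluster centroids, and assign each pixel to its nearest centroid. The uniform bin size stands in for the paper's perception-based quantisation, and the threshold and pixel values are invented for the sketch.

```python
def segment_by_histogram(pixels, bin_size=32, min_share=0.1):
    # Quantise intensities into coarse bins (a stand-in for perception-based
    # colour quantisation), take bins holding at least `min_share` of the
    # pixels as cluster centroids, and label every pixel by its nearest one.
    hist = {}
    for p in pixels:
        b = p // bin_size
        hist[b] = hist.get(b, 0) + 1
    cut = min_share * len(pixels)
    centroids = [b * bin_size + bin_size // 2
                 for b, c in sorted(hist.items()) if c >= cut]
    labels = [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
              for p in pixels]
    return centroids, labels

pixels = [10, 12, 15, 200, 205, 210, 11, 199, 13, 202]   # two obvious modes
centroids, labels = segment_by_histogram(pixels)
```

Selecting centroids directly from dominant histogram bins avoids an iterative clustering loop, which is the source of the computational savings the abstract claims.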
Hybrid approach using map-based estimation and class-specific Hough forest for pedestrian counting and detection
- Author(s): Wei-Gang Chen ; Xun Wang ; Hui-Yan Wang ; Hao-Yu Peng
- Source: IET Image Processing, Volume 8, Issue 12, p. 771 –781
- DOI: 10.1049/iet-ipr.2013.0699
- Type: Article
The system proposed in this study deals with pedestrian counting and detection for intelligent video surveillance. It is a hybrid of map-based and detection-based approaches and combines the advantages of both. After the foreground objects are segmented, the map-based module, which implicitly compensates for perspective distortion by integrally projecting the features onto a given direction, is triggered to estimate the number of pedestrians in each foreground region. Then, a class-specific Hough forest is employed to locate individuals. Experimental results validate this strategy: the map-based module can accurately estimate the count for each region, and its estimates speed up the process of locating individuals by providing cues such as the number of targets and the approximate size of each target. The detection-based module not only locates pedestrians but also enhances the accuracy of the counting.
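The integral-projection step can be illustrated on a binary foreground mask: project the mask onto one axis and estimate how many pedestrians each run of occupied columns holds. The run-counting heuristic and the assumed single-pedestrian width are simplifications of the paper's map-based module, not its exact formulation.

```python
def count_by_projection(mask, single_width=3):
    # Project the binary foreground mask onto the horizontal axis and count
    # runs of occupied columns; a run much wider than one pedestrian
    # (`single_width` columns, an assumed calibration) is counted as several.
    cols = [any(row[c] for row in mask) for c in range(len(mask[0]))]
    count, run = 0, 0
    for occupied in cols + [False]:       # sentinel flushes the last run
        if occupied:
            run += 1
        else:
            if run:
                count += max(1, round(run / single_width))
            run = 0
    return count

# Toy mask: one isolated pedestrian and one two-person group.
mask = [
    [0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0],
]
```

In the full system, a per-row width calibration would compensate perspective distortion (people nearer the camera occupy more columns), which the fixed `single_width` here ignores.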
Histogram modification using grey-level co-occurrence matrix for image contrast enhancement
- Author(s): Yang Hongbo and Hou Xia
- Source: IET Image Processing, Volume 8, Issue 12, p. 782 –793
- DOI: 10.1049/iet-ipr.2013.0657
- Type: Article
Histogram modification is an important technique for contrast enhancement. Most histogram changes are based on global or local-region grey-level information. In this study, a novel grey-level co-occurrence matrix (GCOM)-based histogram equalisation (COHE) method is proposed. A GCOM is a matrix, or distribution, of co-occurring grey-levels at a given offset, in which each row or column vector is in effect a conditional histogram. The COHE procedure has two steps. First, the modified conditional histograms, which are weighted sums of uniformly distributed histograms and the conditional histograms, are equalised; a method for adjusting the weight parameter is also presented. Equalising the conditional histograms has the advantage of enlarging the difference between a given grey-level and its spatially adjacent grey-levels. Second, the COHE algorithm finds a mapping that achieves global enhancement by weighting all the conditionally translated grey-levels with the original image histogram. However, this can produce over-enhanced, unnatural-looking images because of spikes in the conditional and original histograms. To deal with this, methods of adjusting the conditional histogram and the original histogram based on the GCOM are introduced. Experimental results demonstrate that the proposed method can enhance images effectively.
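The two building blocks, the co-occurrence matrix and the uniform-blended conditional histogram, are easy to show directly. The offset (0, 1) and the blending weight are illustrative choices; the equalisation and remapping steps of COHE are not reproduced.

```python
def glcm(img, levels):
    # Grey-level co-occurrence matrix for the horizontal offset (0, 1):
    # m[a][b] counts how often level b appears immediately right of level a.
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def blended_conditional_hist(m, g, w=0.5):
    # Row g of the GLCM is the conditional histogram of the right-hand
    # neighbours of level g; blending it with a uniform histogram (weight w)
    # tames the spikes that cause over-enhancement.
    row = m[g]
    total = sum(row) or 1
    n = len(row)
    return [w * (1.0 / n) + (1 - w) * (c / total) for c in row]

img = [[0, 0, 1, 2], [0, 1, 1, 2]]     # toy 3-level image
m = glcm(img, levels=3)
h = blended_conditional_hist(m, g=0)
```

The blended row still sums to one, so it remains a valid histogram and can be equalised like any other.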
New technique for online object tracking-by-detection in video
- Author(s): Maha M. Azab ; Howida A. Shedeed ; Ashraf S. Hussein
- Source: IET Image Processing, Volume 8, Issue 12, p. 794 –803
- DOI: 10.1049/iet-ipr.2014.0238
- Type: Article
Object detection and tracking is an important task within the field of computer vision, because of its promising applications in many areas, such as video surveillance. The need for automated video analysis has generated a great deal of interest in the area of motion tracking. A new technique is proposed for online object tracking-by-detection capable of achieving high detection and tracking rates, using a stationary camera, in a particle filtering framework. The fundamental innovation is that the detection technique integrates the local binary pattern texture feature, the red green blue (RGB) colour feature and the Sobel edge feature, using the Choquet fuzzy integral to avoid uncertainty in the classification. This is performed by extracting the colour and edge grey-scale confidence maps and introducing the texture confidence map. Then, the tracking technique makes use of the continuous confidence detectors, extracted from those confidence maps, along with another three introduced classifier confidence maps, extracted from an online boosting classifier. Finally, both the confidence detectors and the classifier maps are integrated in the particle filtering framework, using the Choquet integral. Experimental results for both indoor and outdoor dataset sequences confirmed the robustness of the proposed technique against illumination variation and scene motion.
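The Choquet integral that fuses the texture, colour and edge confidences can be computed directly once a fuzzy measure over feature subsets is fixed. The measure values and confidence scores below are invented for illustration; the paper's learned measure is not reproduced.

```python
def choquet(values, mu):
    # Discrete Choquet integral of per-feature confidences `values` with
    # respect to fuzzy measure `mu` (a dict over frozensets of feature names).
    # Sort confidences ascending; each increment is weighted by the measure
    # of the set of features still at or above that level.
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for name, v in items:
        total += (v - prev) * mu[frozenset(remaining)]
        prev = v
        remaining.discard(name)
    return total

# Illustrative fuzzy measure: colour and edge together are worth more than
# their singletons suggest, modelling a positive interaction between features.
mu = {
    frozenset(): 0.0,
    frozenset({"texture"}): 0.2,
    frozenset({"colour"}): 0.3,
    frozenset({"edge"}): 0.3,
    frozenset({"texture", "colour"}): 0.5,
    frozenset({"texture", "edge"}): 0.5,
    frozenset({"colour", "edge"}): 0.8,
    frozenset({"texture", "colour", "edge"}): 1.0,
}
conf = {"texture": 0.4, "colour": 0.9, "edge": 0.7}
score = choquet(conf, mu)
```

Unlike a weighted average, the non-additive measure lets the fusion reward agreement between specific feature subsets, which is why it handles uncertainty between partially redundant cues.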
Saliency detection framework via linear neighbourhood propagation
- Author(s): Jingbo Zhou ; Shangbing Gao ; Yunyang Yan ; Zhong Jin
- Source: IET Image Processing, Volume 8, Issue 12, p. 804 –814
- DOI: 10.1049/iet-ipr.2013.0599
- Type: Article
In this study, a novel saliency detection algorithm based on linear neighbourhood propagation is proposed. The algorithm has three steps. First, the authors segment an input image into superpixels, which are represented as nodes in a graph; the weight matrix of the graph, which encodes the similarities between nodes, is calculated by linear neighbourhood reconstruction. Second, the nodes located at the top, bottom, left and right of the image boundary are labelled as boundary priors, and label propagation based on the weight matrix spreads these labels to the unlabelled nodes; the nodes are then ranked by their label information, and those with the least information are selected as saliency priors. Finally, starting from the saliency priors, saliency detection is carried out by label propagation again, and the nodes with more information are taken as salient regions. Experimental results on three benchmark databases demonstrate that the proposed method compares favourably with state-of-the-art methods in terms of accuracy and robustness.
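The propagation step can be sketched on a tiny superpixel graph: boundary nodes are seeded as background, labels diffuse through the affinity matrix, and nodes that absorb little background label are the salient ones. The simple iterative update and the toy graph are assumptions; the paper builds its weights by linear neighbourhood reconstruction.

```python
def propagate(W, y, alpha=0.9, iters=300):
    # Label propagation f <- alpha * Wn f + (1 - alpha) * y, where Wn is the
    # row-normalised affinity matrix and y holds the seed labels
    # (boundary superpixels marked as background priors).
    n = len(y)
    Wn = [[w / (sum(row) or 1.0) for w in row] for row in W]
    f = list(y)
    for _ in range(iters):
        f = [alpha * sum(Wn[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    return f

# Toy graph: superpixels 0-3 lie on the image boundary (background seeds,
# label 1); superpixels 4-5 form a coherent interior region only weakly
# linked to the boundary.
W = [
    [0.0, 1.0, 1.0, 1.0, 0.1, 0.1],
    [1.0, 0.0, 1.0, 1.0, 0.1, 0.1],
    [1.0, 1.0, 0.0, 1.0, 0.1, 0.1],
    [1.0, 1.0, 1.0, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.0, 1.0],
    [0.1, 0.1, 0.1, 0.1, 1.0, 0.0],
]
y = [1, 1, 1, 1, 0, 0]
f = propagate(W, y)
saliency = [1.0 - v for v in f]   # low background score => salient
```

The interior pair ends with a markedly lower background score than the boundary nodes, so it would be selected as the salient region in the second propagation round.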
Image restoration by blind-Wiener filter
- Author(s): Jae-Chern Yoo and Chang Wook Ahn
- Source: IET Image Processing, Volume 8, Issue 12, p. 815 –823
- DOI: 10.1049/iet-ipr.2013.0693
- Type: Article
The Wiener filter yields the minimum mean-square error between the restored image and the original image. However, to obtain an optimal result, accurate knowledge of the power spectra of the noise and the original image is required in addition to the degradation function; otherwise, the restored result is unsatisfactory. This study presents a so-called blind-Wiener filter that can restore the original image when no knowledge of the power spectra of either the noise or the original image is available. It exploits the fact that averaging several consecutively measured images enhances the signal-to-noise ratio (SNR); the number of images to be averaged to reduce noise to an acceptable level was found to be around ten. Ten independent random noises were added to a given corrupted image, yielding ten images with different noise, and each was restored by a Wiener filter to give ten Wiener-filtered images. Finally, the corrupted image was restored by averaging the ten Wiener-filtered images. Experiments were conducted in a practical setting to demonstrate the effectiveness of the proposed method. The results show that all the images in the test set were vastly improved, and that some gave performance almost comparable to the traditional Wiener filter, regarded as the best restoration method in terms of peak SNR.
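The averaging principle behind the method, that combining several independently-noised versions suppresses noise by roughly a factor of sqrt(10) in standard deviation, can be verified numerically. The per-copy Wiener filtering step is replaced by the identity here to isolate the averaging effect; signal values and seeds are arbitrary.

```python
import random
import statistics

def average_restorations(signal, noise_sigma=1.0, copies=10, seed=7):
    # Add `copies` independent noise realisations to the signal and average
    # the results pixel-wise. With independent zero-mean noise the averaged
    # noise standard deviation shrinks by about sqrt(copies).
    rng = random.Random(seed)
    noisy = [[s + rng.gauss(0.0, noise_sigma) for s in signal]
             for _ in range(copies)]
    return [sum(col) / copies for col in zip(*noisy)]

true_signal = [5.0] * 2000                       # constant toy "image"
one_shot_rng = random.Random(1234)
one_shot = [5.0 + one_shot_rng.gauss(0.0, 1.0) for _ in range(2000)]
averaged = average_restorations(true_signal)

err_one_shot = statistics.pstdev(x - 5.0 for x in one_shot)
err_averaged = statistics.pstdev(x - 5.0 for x in averaged)
```

With ten copies the residual error drops to roughly a third of the single-measurement error, consistent with the paper's conclusion that about ten averages suffice.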
Active contours with a joint and region-scalable distribution metric for interactive natural image segmentation
- Author(s): Xin Liu ; Shu-Juan Peng ; Yiu-ming Cheung ; Yuan Yan Tang ; Ji-Xiang Du
- Source: IET Image Processing, Volume 8, Issue 12, p. 824 –832
- DOI: 10.1049/iet-ipr.2013.0594
- Type: Article
In this study, the authors present an efficient active contour with a joint and region-scalable distribution metric for interactive natural image segmentation. First, they project a red–green–blue image into the CIELab colour space and employ independent component analysis to select two subspace channels. Then, by initialising the evolving curve interactively as one or more polygonal curves, they compute a joint probability distribution associated with a region-scalable mask to model the regional statistics, and propose a simple but effective distribution metric to regularise the active contours. Subsequently, they convert the resulting level-set function into a binary pattern and find the larger 8-connected regions as the desired objects. Finally, the selected regions are smoothed with a circular averaging filter to obtain the final segmentation results. The proposed approach not only deals with complex appearance and intensity inhomogeneity, but also has the advantages of fast convergence and easy implementation. Experiments show precise and reliable segmentation results in comparison with state-of-the-art competing approaches.
Single-image super-resolution with total generalised variation and Shearlet regularisations
- Author(s): Wensen Feng and Hong Lei
- Source: IET Image Processing, Volume 8, Issue 12, p. 833 –845
- DOI: 10.1049/iet-ipr.2013.0503
- Type: Article
In this study, the authors propose a novel regularisation model for resolution enhancement of a clean or noisy single image, based on total generalised variation (TGV) and the Shearlet transform. The proposed model makes two main contributions. First, unlike models with total variation regularisation, which assume that images consist of piecewise-constant areas, the TGV-based model is aware of higher-order smoothness and thus eliminates staircase-like artefacts. Second, the model preserves various image features, including edges and fine details. This is natural, since Shearlets mathematically provide an optimally sparse approximation for the class of piecewise-smooth functions with rich geometric information. Moreover, to solve the proposed model, an efficient numerical scheme is developed based on Nesterov's algorithm. A series of numerical experiments validates the effectiveness of the proposed method.
Image processing system dedicated to a visual intra-cortical stimulator
- Author(s): Anthony Ghannoum ; Ebrahim Ghafar-Zadeh ; Mohamad Sawan
- Source: IET Image Processing, Volume 8, Issue 12, p. 846 –855
- DOI: 10.1049/iet-ipr.2013.0838
- Type: Article
Microstimulation is a feasible method targeting visual impairment. In this paper, the authors focus on intra-cortical stimulation to cover the broader spectrum of the issue with the aim of providing a better suited visual aid. They present an overall modular architecture and focus on creating re-usable image processing tools that can be used for image simplification and recognition tasks. One of the main challenges in the image processing path is the real-time restriction; hence they resort to field-programmable gate array (FPGA) hardware acceleration. Herein they demonstrate and describe the architecture of an image feature extractor based on the difference of Gaussians that is at once accurate, generic and low on resources. This architecture also features a Huffman encoding engine that proves useful when resorting to software–hardware (SW–HW) hybrid implementations, and a technique of calibrating and calculating the phosphene map.
Synthetic aperture radar image segmentation using fuzzy label field-based triplet Markov fields model
- Author(s): Fan Wang ; Yan Wu ; Jianwei Fan ; Xue Zhang ; Qiang Zhang ; Ming Li
- Source: IET Image Processing, Volume 8, Issue 12, p. 856 –865
- DOI: 10.1049/iet-ipr.2013.0686
- Type: Article
The recently proposed triplet Markov random fields (TMF) model is well suited to non-stationary image segmentation. However, owing to multiplicative speckle noise, a synthetic aperture radar (SAR) image is dim and blurred at the boundaries between different areas, making it difficult to locate boundaries accurately during segmentation. In this study, the authors therefore propose a new segmentation algorithm for SAR images using a fuzzy label field-based TMF model. In the proposed algorithm, the value of each site in the label field is extended from the finite discrete set of the classical TMF model to a continuous one, in order to describe the membership of each pixel to the different classes. A fuzzy energy function is constructed to describe the joint prior distribution of the fuzzy label field and the auxiliary field; its construction also takes into account four-direction information and the degree of difference between neighbouring pixels. The iterative conditional estimation method and the maximum posterior mode criterion are applied for parameter estimation and segmentation. Experimental results on simulated data and real SAR images demonstrate the effectiveness of the proposed algorithm.