

IET Image Processing
Volume 9, Issue 3, March 2015
Image compression and encryption scheme using fractal dictionary and Julia set
- Author(s): Yuanyuan Sun ; Rudan Xu ; Lina Chen ; Xiaopeng Hu
- Source: IET Image Processing, Volume 9, Issue 3, pp. 173–183
- DOI: 10.1049/iet-ipr.2014.0224
- Type: Article
An efficient and secure environment is necessary for data transmission and storage, especially for large-volume multimedia data. In this study, a novel compression–encryption scheme is presented using a fractal dictionary and a Julia set. For the compression, fractal dictionary encoding not only reduces time consumption but also gives good-quality image reconstruction. For the encryption, the key space is large and the key is highly sensitive, even to tiny perturbations. Moreover, the stream cipher encryption and the diffusion process adopted in this study help spread perturbations in the plaintext, achieving high plaintext sensitivity and effective resistance to chosen-plaintext attacks.
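The abstract does not spell out the key schedule; as a rough illustration only, here is how a Julia-set orbit z ↦ z² + c can drive a stream cipher. The quantisation rule, the parameters c and z0, and the plain XOR step below are assumptions for the sketch, not the authors' scheme:

```python
import numpy as np

def julia_keystream(c, z0, n):
    """Byte keystream from the Julia-set orbit z -> z**2 + c.

    c and z0 act as secret keys; tiny perturbations of either
    produce a different stream (key sensitivity).
    """
    z = z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        z = z * z + c
        if abs(z) > 2.0:        # keep the orbit bounded
            z /= abs(z)
        out[i] = int(abs(z.real) * 1e6) % 256   # quantise orbit to a byte
    return out

plain = np.frombuffer(b"multimedia data", dtype=np.uint8)
ks = julia_keystream(-0.8 + 0.156j, 0.1 + 0.2j, plain.size)
cipher = np.bitwise_xor(plain, ks)              # stream-cipher step
recovered = np.bitwise_xor(cipher, ks)
```

XOR-ing with the same keystream twice recovers the plaintext; the published scheme additionally applies a diffusion pass so that a one-byte change in the plaintext propagates through the whole ciphertext.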
Histogram of dense subgraphs for image representation
- Author(s): Mouna Dammak ; Mahmoud Mejdoub ; Chokri Ben Amar
- Source: IET Image Processing, Volume 9, Issue 3, pp. 184–191
- DOI: 10.1049/iet-ipr.2014.0189
- Type: Article
Modelling the spatial information of local features is known to improve performance in image categorisation. Compared with simple pairwise features and visual phrases, graphs can capture the structural organisation of local features more adequately. Moreover, a dense regular grid can guarantee a more reliable representation than interest points and gives better results for image classification. In this study, the authors introduce a bag of dense local graphs approach that combines the efficiency of the bag-of-visual-words model with the representational power of graphs. The images are represented with dense local graphs built upon dense scale-invariant feature transform (SIFT) descriptors. The graph-based substructure pattern mining (gSpan) algorithm is applied to the local graphs to discover the frequent local subgraphs, producing a bag-of-subgraphs representation. Results are reported from experiments conducted on four challenging benchmarks and show that the proposed subgraph histogram improves categorisation accuracy.
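The paper mines frequent subgraphs with gSpan; as a much-reduced toy stand-in, one can already histogram the labelled edges (the smallest possible subgraphs) of a 4-neighbour grid graph over quantised dense descriptors. The tiny word map below is hypothetical:

```python
import numpy as np

def edge_histogram(words, vocab):
    """Histogram over labelled edges of a 4-neighbour grid graph of
    visual words -- size-2 subgraphs only; the published method mines
    larger frequent subgraphs with a gSpan-style algorithm."""
    h, w = words.shape
    hist = np.zeros((vocab, vocab))
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):      # right and down neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    a, b = sorted((int(words[y, x]), int(words[yy, xx])))
                    hist[a, b] += 1              # unordered word pair
    return hist.ravel() / hist.sum()

# hypothetical 2x2 map of quantised dense-SIFT words
words = np.array([[0, 1],
                  [1, 0]])
feature = edge_histogram(words, vocab=2)
```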
Tuned depth signal analysis on merged transform domain for view synthesis in free viewpoint systems
- Author(s): Faten Chaieb ; Dorsaf Sebai ; Faouzi Ghorbel
- Source: IET Image Processing, Volume 9, Issue 3, pp. 192–201
- DOI: 10.1049/iet-ipr.2013.0516
- Type: Article
With the 3D era fully under way, depth map coding has become essential to the adoption of 3D across fields of application ranging from video games to medical imaging. This study presents a novel depth coding approach that, after a decimation step favouring the foreground, decomposes depth maps into sparse coefficients over a redundant dictionary of mixed discrete cosine and B-spline atoms, which is well matched to the piecewise-linear nature of depth maps. The decomposition searches for the best rate/distortion trade-off by minimising an adaptive cost function whose weight parameter is adjusted according to depth homogeneity: the larger the parameter, the more sparsity is favoured at the expense of synthesis quality. Furthermore, the distortion measure in the cost function quantifies the effect of depth map coding on the quality of rendered views. Experiments show the relevance of the proposed method, which obtains considerable trade-offs between bitrate and synthesised-view distortion.
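Sparse decomposition over a redundant mixed dictionary can be illustrated with plain matching pursuit. The dictionary below (a few DCT atoms plus linear "hat" atoms as degree-1 B-spline stand-ins) and the toy 1-D depth profile are assumptions for the sketch, not the paper's construction or its adaptive cost function:

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms=8):
    """Greedy sparse decomposition over a redundant dictionary D
    (unit-norm columns): pick the best-correlated atom, subtract, repeat."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

N = 64
t = np.arange(N, dtype=float)
# DCT atoms for smooth regions ...
dct = np.stack([np.cos(np.pi * k * (t + 0.5) / N) for k in range(16)], axis=1)
# ... plus linear "hat" atoms for piecewise-linear parts
hats = np.stack([np.maximum(0.0, 1.0 - np.abs(t - c) / 8.0)
                 for c in range(8, N, 8)], axis=1)
D = np.concatenate([dct, hats], axis=1)
D /= np.linalg.norm(D, axis=0)                  # unit-norm columns

depth = np.where(t < 24, 0.5 + t / 64.0, 1.5 - t / 128.0)  # toy depth profile
coeffs, residual = matching_pursuit(depth, D)
```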
Active contours with prior corner detection to extract discontinuous boundaries of anatomical structures in X-ray images
- Author(s): Aruni U.A. Niroshika ; Ravinda G.N. Meegama ; Ravindra S. Lokupitiya ; Donna K.S. Kannangara
- Source: IET Image Processing, Volume 9, Issue 3, pp. 202–210
- DOI: 10.1049/iet-ipr.2014.0106
- Type: Article
Active contours are deformable curves that evolve according to an energy-minimising function and are widely used in computer vision and image processing to extract features of interest from raw images acquired with an image capture device. One of their major limitations is an inability to converge accurately when the object of interest exhibits sharp corners. In this study, a new active contour model for extracting the boundaries of objects with sharp corners is presented. By incorporating a priori knowledge of the object's significant corners into the deforming contour, the proposed active contour is able to deform towards the boundaries of the object without overshooting the corners. The ability of the new technique to accurately extract anatomical structures with sharp corners from medical X-ray images is demonstrated.
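The abstract does not name the corner detector used for the prior; a standard Harris response is one plausible way to obtain such a corner prior, so the sketch below is written under that assumption rather than as the authors' method:

```python
import numpy as np

def harris_response(img, k=0.05, win=2):
    """Harris corner response; positive peaks mark corners that the
    deforming contour should not overshoot."""
    Iy, Ix = np.gradient(img.astype(float))     # np.gradient: axis0 then axis1
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
            sxx, syy, sxy = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            # det - k * trace^2 of the local second-moment matrix
            R[y, x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return R

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                  # bright square with 4 sharp corners
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```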
Patch-based locality-enhanced collaborative representation for face recognition
- Author(s): Ru-Xi Ding ; He Huang ; Jin Shang
- Source: IET Image Processing, Volume 9, Issue 3, pp. 211–217
- DOI: 10.1049/iet-ipr.2014.0078
- Type: Article
In the field of face recognition, the small sample size (SSS) problem and non-ideal facial image conditions are recognised as two of the most challenging issues. Recently, Zhu et al. proposed a patch-based collaborative representation (PCRC) method that performs well on the SSS and single-sample-per-person problems, and Peng et al. proposed a locality-constrained collaborative representation (LCCR) method that achieves high robustness in non-ideal conditions. Inspired by these methods, this study proposes a patch-based locality-enhanced collaborative representation (PLECR) method that combines and enhances the advantages of both PCRC and LCCR. PLECR and several related methods are evaluated on the AR, FERET (face recognition technology) and Extended Yale B databases; extensive numerical results show that PLECR is the most effective of these methods for the SSS problem in non-ideal conditions, especially with occlusions.
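In its simplest form, the collaborative-representation idea behind PCRC and LCCR is ℓ2-regularised coding of a query over all training samples, followed by class-wise residual comparison. A minimal sketch on toy data (no patch division or locality weighting, which are the actual contributions of those methods):

```python
import numpy as np

def cr_classify(X, labels, y, lam=0.01):
    """Code query y collaboratively over all columns of X (ridge
    regression), then assign the class with the smallest residual."""
    n = X.shape[1]
    a = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        res = np.linalg.norm(y - X[:, mask] @ a[mask])  # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best

# toy gallery: two classes with distinct patterns, two samples each
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9],
              [0.0, 0.0, 0.0, 0.0]])
labels = [0, 0, 1, 1]
query = np.array([1.0, 0.05, 0.0])
pred = cr_classify(X, labels, query)
```

The query lies close to the span of the class-0 samples, so the class-0 residual is the smaller one.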
Adaptive CLEAN algorithm for millimetre wave synthetic aperture imaging radiometer in near field
- Author(s): Jianfei Chen ; Yuehua Li ; Jianqiao Wang ; Yuanjiang Li ; Yilong Zhang
- Source: IET Image Processing, Volume 9, Issue 3, pp. 218–225
- DOI: 10.1049/iet-ipr.2014.0443
- Type: Article
High-resolution millimetre wave images of contiguous targets often suffer from sidelobe artefacts, partial correlation between targets and related effects. Owing to the characteristics of near-field synthetic aperture imaging radiometers, such as the changing point spread function (PSF) and slender sidelobes, the existing CLEAN algorithms are unsuitable for near-field synthetic aperture imaging. This study establishes a novel CLEAN algorithm (named adaptive CLEAN) to clean reconstructed millimetre wave images accurately. First, the characteristics of near-field synthetic aperture imaging are studied. Then, the adaptive CLEAN algorithm is established on the basis of these characteristics: the authors amend the amplitudes of the targets and select the matching PSF for them according to their azimuths. Unlike other CLEAN algorithms, the parameters of the adaptive CLEAN algorithm are calculated by a formula derived from extensive simulation experiments. Finally, the effectiveness of the proposed algorithm is tested in several simulation experiments, and its superiority is demonstrated by comparison with an existing CLEAN algorithm. The results show that the proposed method is an efficient, feasible algorithm for cleaning reconstructed images of both point and contiguous targets.
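For reference, the classical Högbom-style CLEAN loop that the adaptive variant builds on; the adaptive algorithm additionally selects a position-dependent near-field PSF and computes its parameters from a formula, both of which are omitted in this sketch:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.2, n_iter=50):
    """Iteratively locate the brightest pixel, record a fraction of it in
    the model, and subtract the correspondingly shifted/scaled PSF."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    h, w = residual.shape
    ph, pw = psf.shape
    cy, cx = ph // 2, pw // 2
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = gain * residual[y, x]
        model[y, x] += peak
        # clip the PSF footprint to the image borders, then subtract
        y0, y1 = max(0, y - cy), min(h, y - cy + ph)
        x0, x1 = max(0, x - cx), min(w, x - cx + pw)
        residual[y0:y1, x0:x1] -= peak * psf[y0 - y + cy:y1 - y + cy,
                                             x0 - x + cx:x1 - x + cx]
    return model, residual

# toy dirty image: one point source blurred by a Gaussian PSF
yy, xx = np.mgrid[-2:3, -2:3]
psf = np.exp(-(yy**2 + xx**2) / 2.0)       # peak value 1 at the centre
dirty = np.zeros((32, 32))
dirty[8:13, 10:15] = psf                    # source located at (10, 12)
model, residual = hogbom_clean(dirty, psf)
```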
Entropy maximisation histogram modification scheme for image enhancement
- Author(s): Zhao Wei ; Huang Lidong ; Wang Jun ; Sun Zebin
- Source: IET Image Processing, Volume 9, Issue 3, pp. 226–235
- DOI: 10.1049/iet-ipr.2014.0347
- Type: Article
Contrast enhancement plays an important role in image processing applications, and global histogram equalisation (GHE)-based techniques are popular for their simplicity. In this study, the authors divide GHE into two steps: a pixel populations mergence (PPM) step and a grey-levels distribution (GLD) step. In the PPM step, the pixel populations of adjoining grey levels that would be mapped to the same grey level are first merged in the input histogram. In the GLD step, the new grey levels are then redistributed according to a corresponding transformation function. This division is meaningful because the entropy of the enhanced image is determined only by the pixel populations, regardless of the grey levels, and the authors prove that the merging reduces the entropy of the enhanced image. Inspired by GHE, they propose a novel entropy maximisation histogram modification scheme that also consists of PPM and GLD steps; however, the reduction of entropy in their PPM step is minimised under a newly presented entropy maximisation rule, and in the GLD step the grey levels of the merged histogram are redistributed using a log-based distribution function to control the enhancement level. Experimental results demonstrate that the proposed method is effective.
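The key observation, that GHE merges adjacent grey levels and thereby loses entropy, can be checked directly. A minimal sketch of plain GHE on a skewed test histogram (the proposed entropy-maximisation rule and log-based redistribution are not reproduced here):

```python
import numpy as np

def hist_entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def global_he(img, levels=256):
    """Classical global histogram equalisation via the scaled CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)
    return mapping[img]    # adjacent levels with small populations merge

# skewed test image: one heavy background level plus 255 sparse levels
img = np.concatenate([np.zeros(2550, dtype=np.uint8),
                      np.repeat(np.arange(1, 256, dtype=np.uint8), 10)])
out = global_he(img)
```

Because several sparsely populated levels collapse onto the same output level, the equalised image has strictly lower histogram entropy than the input.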
Synthetic aperture radar image despeckling via total generalised variation approach
- Author(s): Wensen Feng ; Hong Lei ; Hong Qiao
- Source: IET Image Processing, Volume 9, Issue 3, pp. 236–248
- DOI: 10.1049/iet-ipr.2013.0701
- Type: Article
Speckle reduction is an important task in synthetic aperture radar. One extensively used approach is based on total variation (TV) regularisation, which preserves sharp edges well but introduces undesirable staircasing artefacts: TV-based methods tend to create piecewise-constant images even in regions with smooth transitions. In this study, a new method is proposed for speckle reduction via a total generalised variation (TGV) penalty, motivated by the fact that the TGV-based model accounts for higher-order smoothness and thereby reduces the staircasing artefacts of TV. An efficient numerical scheme based on Nesterov's algorithm is also developed for solving the TGV-based optimisation problem. Monte Carlo experiments show that the proposed scheme yields state-of-the-art results in terms of both performance and speed; especially when the image has some higher-order smoothness, the authors' scheme outperforms the TV-based methods.
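To make the baseline concrete, here is a smoothed-TV gradient descent in 1-D; this is the first-order TV model whose staircasing TGV is designed to avoid, not the authors' TGV penalty or their Nesterov scheme, and the parameters and smoothing ε are illustrative:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.02, n_iter=1000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)       # derivative of the smoothed |.|
        # gradient of the TV term w.r.t. each sample
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x

clean = np.concatenate([np.zeros(20), np.ones(20)])   # ideal step edge
noise = 0.3 * (-1.0) ** np.arange(40)                 # deterministic "speckle"
noisy = clean + noise
denoised = tv_denoise_1d(noisy)
```

The step edge survives while the oscillation is suppressed; on smooth ramps, however, this first-order model would flatten the signal into stairs, which is exactly the artefact TGV addresses with its second-order term.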
Shape registration using characteristic functions
- Author(s): Muayed S. Al-Huseiny ; Sasan Mahmoodi
- Source: IET Image Processing, Volume 9, Issue 3, pp. 249–260
- DOI: 10.1049/iet-ipr.2014.0467
- Type: Article
This study presents a fast algorithm for the registration of shapes implicitly represented by their characteristic functions. The algorithm aims to recover the registration parameters (scaling, rotation and translation) by minimising a dissimilarity term between the two shapes, using phase correlation and statistical shape moments to compute the registration parameters individually. The method is applied to various registration problems, addressing issues such as the registration of shapes with different topologies and of complex shapes containing various numbers of sub-shapes. In comparison with other state-of-the-art shape registration techniques in the literature, the proposed method offers better accuracy, higher convergence speed, robustness in the presence of heavy noise and better performance for registration over large shape databases.
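The phase-correlation component can be sketched for the translation part (integer, circular shifts only; the full method also recovers rotation and scale, using shape moments, and operates on characteristic functions of shapes rather than the random test image used here):

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the integer (circular) translation taking image a to image b."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12       # keep the phase only
    corr = np.fft.ifft2(cross).real      # ideally a delta at the true shift
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
shape_a = rng.random((32, 32))           # stand-in for a characteristic function
shape_b = np.roll(shape_a, (5, 7), axis=(0, 1))
shift = phase_correlation(shape_a, shape_b)
```

By the Fourier shift theorem, the normalised cross-power spectrum of two circularly shifted images is a pure phase ramp, whose inverse transform peaks exactly at the shift.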