IET Image Processing
Volume 7, Issue 6, August 2013
Spatio-temporal video contrast enhancement
- Author(s): Turgay Celik
- Source: IET Image Processing, Volume 7, Issue 6, p. 543 –555
- DOI: 10.1049/iet-ipr.2012.0687
- Type: Article
A video contrast enhancement algorithm which automatically enhances the contrast of a video using spatial and temporal information is proposed. The algorithm is based on the observation that the contrast in a video frame can be improved by increasing the grey-level differences between each pixel of the video frame and its neighbouring pixels. Furthermore, such an improvement should be smooth between consecutive video frames so that a continuum of contrast improvement is achieved. A two-dimensional (2D) histogram of a video frame is constructed using the mutual relationship between each pixel and its neighbouring pixels. For each video frame, a 2D target histogram is computed by considering the 2D histogram of the video frame, a 2D uniformly distributed histogram, and the 2D histograms of the forward and backward neighbouring video frames. The contrast enhancement of the video frame is achieved by mapping the diagonal elements of the 2D input histogram to the diagonal elements of the 2D target histogram. The proposed algorithm is easy to implement and is thus suitable for real-time contrast enhancement applications.
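The core step, building a 2D histogram over (pixel, neighbourhood-mean) pairs and remapping grey levels through its diagonal, can be sketched as follows. This toy version works on a single frame only and omits the temporal smoothing and target-histogram construction described in the abstract; the bin count and the 4-neighbour mean are illustrative choices, not the paper's settings.

```python
import numpy as np

def enhance_2d_hist(frame, bins=64):
    """Toy single-frame sketch: equalise the diagonal of a 2D
    (pixel, neighbour-mean) histogram."""
    f = (frame.astype(np.float64) / 255.0 * (bins - 1)).astype(int)
    # neighbour mean via 4-neighbour averaging (edges padded by replication)
    p = np.pad(f, 1, mode='edge')
    nb = ((p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0).astype(int)
    # accumulate the joint (pixel bin, neighbour-mean bin) histogram
    h2d = np.zeros((bins, bins))
    np.add.at(h2d, (f, nb), 1.0)
    # map grey levels through the cumulative distribution of the diagonal
    diag = np.diag(h2d)
    cdf = np.cumsum(diag) / max(diag.sum(), 1.0)
    lut = np.round(cdf * 255.0).astype(np.uint8)
    return lut[f]
```

The full algorithm maps the input 2D histogram's diagonal onto a target diagonal blended from neighbouring frames; the sketch above only shows the single-frame remapping mechanics.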
Lossy and lossless image encoding using multi-scale recurrent pattern matching
- Author(s): Danillo Bracco Graziosi ; Nuno Miguel Rodrigues ; Eduardo A.B. da Silva ; Murilo B. de Carvalho ; Sérgio Manuel Maciel de Faria
- Source: IET Image Processing, Volume 7, Issue 6, p. 556 –566
- DOI: 10.1049/iet-ipr.2012.0538
- Type: Article
In this study, the authors investigate the use of the multi-scale recurrent pattern matching paradigm for lossless image compression. The multi-scale multidimensional parser (MMP) algorithm is a successful implementation of this paradigm for lossy image compression, and can naturally perform lossless compression since it was first derived from a Lempel–Ziv lossless scheme. However, neither had its recently adopted coding tools been adapted for lossless coding, nor had a thorough analysis of its lossless performance been carried out. In this work, the authors evaluate MMP's lossless compression capability, proposing modifications to some of its prediction modes, as well as the inclusion of an adaptive prediction mode based on least squares. The residual information is also coded with well-known techniques used in lossless compression. Experimental results show that MMP achieves good performance for images such as computer-generated graphics and scanned documents, while keeping a competitive performance for natural images. Since the algorithm's structure is exactly the same for lossless and lossy compression, the obtained results suggest that MMP is able to achieve high compression performance for a wide range of images and rates, from lossy to lossless, without any prior analysis of the image to be coded.
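The adaptive prediction mode based on least squares can be illustrated with a minimal sketch that predicts each pixel from its causal neighbours (W, N, NW). The three-pixel context and the single global fit are simplifying assumptions for illustration, not the paper's actual MMP integration.

```python
import numpy as np

def ls_predict(img):
    """Toy least-squares predictor over causal neighbours (W, N, NW):
    fit one coefficient vector for the whole image, return it with
    the prediction residual to be entropy coded."""
    x = img.astype(np.float64)
    T = x[1:, 1:]                 # targets: every pixel with full context
    W = x[1:, :-1]                # west neighbour
    N = x[:-1, 1:]                # north neighbour
    NW = x[:-1, :-1]              # north-west neighbour
    A = np.stack([W.ravel(), N.ravel(), NW.ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, T.ravel(), rcond=None)
    residual = T.ravel() - A @ coef
    return coef, residual
```

On smooth content the residual is near zero, which is what makes least-squares prediction attractive before residual coding.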
Medical image registration based on fast and adaptive bidimensional empirical mode decomposition
- Author(s): Jamal Riffi ; Adnane Mohamed Mahraz ; Hamid Tairi
- Source: IET Image Processing, Volume 7, Issue 6, p. 567 –574
- DOI: 10.1049/iet-ipr.2012.0034
- Type: Article
Image registration plays a crucial role in several areas. Iconic registration methods are more effective than geometrical ones, but they require long execution times. To reduce the execution time of iconic registration, the authors propose a new method based on mutual information that exploits an adaptive multiresolution decomposition: the bidimensional empirical mode decomposition (BEMD) in its fast and adaptive version (FABEMD). The idea is that, instead of registering the two images directly, the authors register the bidimensional intrinsic mode functions (BIMFs) that result from the FABEMD decomposition. The BIMF selected by the authors' algorithm preserves the general form of the image while containing fewer grey levels than the original image; the number of grey-level combinations used when calculating entropy is thus reduced, which in turn reduces the execution time of the registration.
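Mutual information, the similarity measure driving the registration, can be estimated from a joint grey-level histogram. This is a generic sketch (the bin count is an arbitrary choice), not the authors' FABEMD pipeline.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grey
    images, estimated from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint distribution
    px, py = p.sum(axis=1), p.sum(axis=0) # marginals
    nz = p > 0                            # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))
```

Registering a BIMF with fewer grey levels shrinks the occupied histogram cells, which is exactly why the entropy computation, and hence each MI evaluation, gets cheaper.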
Exploiting chrominance planes similarity on listless quadtree coders
- Author(s): Ruzelita Ngadiran ; Said Boussakta ; Ahmed Bouridane ; Fouad Khelifi
- Source: IET Image Processing, Volume 7, Issue 6, p. 575 –585
- DOI: 10.1049/iet-ipr.2012.0016
- Type: Article
This study proposes an efficient algorithm for colour image compression with listless implementation based on set partition block embedded coding (SPECK). The objective of this work is to develop an algorithm that exploits the redundancy in colour spaces, low-complexity quadtree partitioning and reduced memory requirements. Colour images are first transformed into luminance chrominance (YCbCr) planes and a wavelet transform is applied. A reduction of the memory requirement is achieved with the introduction of a state marker that matches each colour plane, eliminating the lists with dynamic memory in the original colour SPECK coder (CSPECK). The wavelet coefficients are scanned using a Z-order that matches the subband decompositions. The proposed algorithm then encodes the de-correlated colour planes as one unit and generates a mixed bit stream. The linear indexing and initial state marker are modified to jointly test the chrominance planes together. Composite colour coding enables precise control of the bit rate. The performance of the proposed algorithm is comparable with CSPECK, set partitioning in hierarchical trees (SPIHT) and JPEG2000, but with lower memory requirements. For progressive lossless coding, a saving of more than 70% in final working memory compared with CSPECK and SPIHT highlights the benefit of the proposed algorithm.
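Z-order (Morton) scanning, used here to traverse the wavelet coefficients, interleaves the bits of the row and column indices so that 2x2 blocks (and, recursively, larger quadtree blocks) are visited contiguously; a minimal sketch:

```python
def morton_index(row, col, bits=16):
    """Interleave the bits of (row, col) into a single Z-order
    (Morton) scanning index."""
    z = 0
    for i in range(bits):
        z |= ((col >> i) & 1) << (2 * i)      # column bits at even positions
        z |= ((row >> i) & 1) << (2 * i + 1)  # row bits at odd positions
    return z
```

Because the index is computable from (row, col) alone, coefficients can be scanned in quadtree order without maintaining explicit lists, which is the point of a listless coder.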
Object virtual viewing using adaptive tri-view morphing
- Author(s): Pin Chatkaewmanee and Matthew Nelson Dailey
- Source: IET Image Processing, Volume 7, Issue 6, p. 586 –595
- DOI: 10.1049/iet-ipr.2012.0241
- Type: Article
This study proposes a new technique for generating an arbitrary virtual view of an object of interest given a set of images taken from around that object. The algorithm extends Xiao and Shah's tri-view morphing scheme to work with wide-baseline imagery. The authors’ method performs feature detection and feature matching across three views, then blends the real views into a virtual view. Tri-view morphing by itself is realistic when occlusion across the three views is minimal, but when it is applied to cases of more complex objects and wide baselines, occlusions lead to significant artefacts. The authors propose a new adaptive algorithm to solve these problems by (i) segmenting the views into object and background, (ii) obtaining fine-grained correspondences across the three views, (iii) constructing, when a border point in one view is occluded in one or two of the other views, a virtual correspondence for that point and (iv) synthesising novel views using barycentric interpolation and automatic elimination of occluded polygons. The result is a system allowing smooth and realistic animation of the virtual object over arbitrary viewing paths.
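The barycentric interpolation step used to synthesise the novel view amounts to a convex combination of three corresponding points, one from each view; a minimal sketch (the weight-selection and occlusion-handling logic of the full system is not shown):

```python
import numpy as np

def barycentric_blend(tri_pts, weights):
    """Blend three corresponding 2D points with barycentric weights
    (non-negative, summing to 1) to place the point in the virtual view."""
    w = np.asarray(weights, dtype=np.float64)
    assert abs(w.sum() - 1.0) < 1e-9 and (w >= 0).all()
    pts = np.asarray(tri_pts, dtype=np.float64)   # shape (3, 2)
    return np.tensordot(w, pts, axes=1)           # weighted sum of rows
```

Moving the weights around the simplex (1,0,0)-(0,1,0)-(0,0,1) corresponds to moving the virtual camera over the triangle spanned by the three real views.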
Surface fitting for individual image thresholding and beyond
- Author(s): Jinhai Cai and Stan Miklavcic
- Source: IET Image Processing, Volume 7, Issue 6, p. 596 –605
- DOI: 10.1049/iet-ipr.2012.0690
- Type: Article
In this study, the authors propose a novel algorithm for background–foreground segmentation. The work is motivated by the need for information about the background that is obscured by objects, in order to achieve accurate segmentation. The algorithm utilises the principle of estimating the occluded background by surface fitting. Edge detection methods are used to detect boundaries between foreground and background, identifying background points as well as foreground points. This categorisation guarantees that most points used for surface fitting are from the same category, so that the proposed surface fitting with the random sample consensus (RANSAC) algorithm produces an accurate estimate of the surface. The authors' algorithm has been applied to the real-world applications of segmenting plant images with inhomogeneous but smooth backgrounds and measuring the relative temperature of plants. Comparisons with experimental results show that the proposed algorithm is able to significantly reduce background inhomogeneities in infra-red images for the accurate estimation of temperature differences between background and plants, which provides important clues for fast and cheap genetic screening. The proposed algorithm is also able to overcome intensity inhomogeneities for accurate image segmentation, particularly for plant root image segmentation with the preservation of lateral plant roots.
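Surface fitting with RANSAC, the estimation step named in the abstract, can be sketched with a planar model fitted to candidate background samples. The plane (rather than a higher-order surface), the iteration count and the inlier threshold are all illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, values, iters=200, thresh=1.0, seed=0):
    """Fit z = a*x + b*y + c to (x, y) -> intensity samples with a
    minimal RANSAC loop, then refit on the consensus set."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=np.float64)
    z = np.asarray(values, dtype=np.float64)
    A = np.column_stack([pts, np.ones(len(pts))])
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(pts), size=3, replace=False)
        try:
            coef = np.linalg.solve(A[idx], z[idx])  # exact plane through 3 samples
        except np.linalg.LinAlgError:
            continue                                 # degenerate (collinear) sample
        inliers = np.abs(A @ coef - z) < thresh
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), coef
    # least-squares refit on the best consensus set
    inliers = np.abs(A @ best - z) < thresh
    coef, *_ = np.linalg.lstsq(A[inliers], z[inliers], rcond=None)
    return coef
```

Misclassified foreground points act as outliers here, which is why RANSAC rather than a plain least-squares fit is used to estimate the occluded background surface.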
Low-bit depth-high-dynamic range image generation by blending differently exposed images
- Author(s): Jae-Il Jung and Yo-Sung Ho
- Source: IET Image Processing, Volume 7, Issue 6, p. 606 –615
- DOI: 10.1049/iet-ipr.2012.0614
- Type: Article
Recently, high-dynamic range (HDR) imaging has taken centre stage because of the drawbacks of low-dynamic range imaging, namely detail losses in under- and over-exposed areas. In this study, the authors propose an algorithm for generating a low-bit-depth HDR image from two differently exposed images. For compatibility with conventional devices, the generation of a large-bit-depth HDR image and the subsequent bit-depth compression are skipped. By using posterior probability-based labelling, luminance adjustment and adaptive blending, the authors directly blend the two input images into one while preserving the global intensity order as well as enhancing the dynamic range. Experiments on various test images confirm that the proposed method generates more natural HDR images than other state-of-the-art algorithms regardless of image properties.
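As a rough illustration of blending two exposures directly into one low-bit-depth image, the sketch below uses generic well-exposedness weights (a Gaussian around mid-grey). This is a stand-in, not the posterior probability-based labelling and adaptive blending the authors actually describe; the sigma value is arbitrary.

```python
import numpy as np

def blend_exposures(under, over, sigma=0.2):
    """Blend two differently exposed images with well-exposedness
    weights: pixels near mid-grey in a source get a higher weight."""
    imgs = [np.asarray(im, dtype=np.float64) / 255.0 for im in (under, over)]
    # Gaussian well-exposedness weight around 0.5, small floor avoids 0/0
    w = [np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-12 for im in imgs]
    out = (w[0] * imgs[0] + w[1] * imgs[1]) / (w[0] + w[1])
    return (out * 255.0).astype(np.uint8)
```

The output stays in 8 bits, matching the abstract's point that a large-bit-depth intermediate and a tone-mapping pass are avoided.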
Mosaic method of side-scan sonar strip images using corresponding features
- Author(s): Jianhu Zhao ; Aixue Wang ; Hongmei Zhang ; Xiao Wang
- Source: IET Image Processing, Volume 7, Issue 6, p. 616 –623
- DOI: 10.1049/iet-ipr.2012.0468
- Type: Article
The towing operation mode of a side-scan sonar (SSS) system easily causes dislocations and distortions of targets in SSS strip images, which makes it difficult to mosaic these strip images by the geocoding method or the tessellation-line method and affects the recognition and understanding of seabed relief. This study therefore proposes a new method: a segment-image mosaic method based on corresponding features of two adjacent SSS strip images. Through SSS image preprocessing, segment matching based on corresponding features, and image fusion of the common coverage area based on the wavelet transform, the new method overcomes the drawbacks of the traditional image mosaic methods and accomplishes the mosaic of SSS strip images, from which a whole-area SSS image is finally formed. Experiments verified that the mosaic image formed by the new method correctly reflects the position, shape and distribution of seabed targets, which is helpful for understanding seabed relief.
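Fusion of the common coverage area can be illustrated with a simple linear feather across the overlap. This is a stand-in for the wavelet-transform-based fusion the authors use, and it assumes the two overlap crops are already aligned and equally sized.

```python
import numpy as np

def feather_overlap(strip_a, strip_b):
    """Blend the common coverage area of two aligned strips with a
    linear feather: full weight to strip_a at the left edge, full
    weight to strip_b at the right edge."""
    assert strip_a.shape == strip_b.shape
    w = strip_a.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :]  # per-column blend weight
    return alpha * strip_a + (1.0 - alpha) * strip_b
```

A wavelet-domain fusion instead merges subband coefficients before inverse transforming, which preserves target detail better than this pixel-domain feather.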
Effective two-step method for face hallucination based on sparse compensation on over-complete patches
- Author(s): Mohamed Naleer Haju Mohamed ; Yao Lu ; Feng Lv
- Source: IET Image Processing, Volume 7, Issue 6, p. 624 –632
- DOI: 10.1049/iet-ipr.2012.0554
- Type: Article
Sparse representation has been successfully applied to face hallucination using low- and high-resolution training face images. In this study, sparse residual compensation is adopted for face hallucination. Firstly, a global face image is constructed from optimal coefficients of the interpolated training images. Secondly, the high-resolution residual image (local face image) is found by using an over-complete patch dictionary and sparse representation. Finally, a hallucinated face image is obtained by combining these two steps. In addition, more details of the face image in the high-frequency parts are recovered using a residual compensation strategy. In the authors' experimental work, it is observed that the balancing sparsity parameter (λ) affects the residual compensation. Further, the proposed algorithm can acquire a high-resolution image even when the number of training image pairs is comparatively small. The experiments show that the authors' method is more effective than other existing two-step face hallucination methods.
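The first (global) step, reconstructing the input as an optimal combination of training faces and transferring those coefficients to the high-resolution counterparts, can be sketched with plain least squares. The paper uses sparse representation, so the unconstrained solver here is a simplifying substitution.

```python
import numpy as np

def global_face(lr_input, lr_train, hr_train):
    """Solve for coefficients that best reconstruct the low-resolution
    input from low-resolution training faces, then apply the same
    coefficients to the high-resolution counterparts."""
    A = np.stack([t.ravel() for t in lr_train], axis=1)   # LR dictionary
    coef, *_ = np.linalg.lstsq(A, lr_input.ravel(), rcond=None)
    H = np.stack([t.ravel() for t in hr_train], axis=1)   # HR dictionary
    return (H @ coef).reshape(hr_train[0].shape)
```

The second step then adds a sparsely coded residual over patch dictionaries to restore the high-frequency local detail the global step misses.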
Multi-focus image fusion based on non-subsampled shearlet transform
- Author(s): Gao Guorong ; Xu Luping ; Feng Dongzhu
- Source: IET Image Processing, Volume 7, Issue 6, p. 633 –639
- DOI: 10.1049/iet-ipr.2012.0558
- Type: Article
In this study, a new multi-focus image fusion algorithm based on the non-subsampled shearlet transform (NSST) is presented. First, an initial fused image is acquired by using a conventional multi-resolution image fusion method. The pixels of the source multi-focus images that have the smaller square error with respect to the corresponding pixels of the initial fused image are considered to be in the focused regions. Based on this principle, the focused regions are determined, and morphological opening and closing are employed for post-processing. Then the focused regions and the focused border regions in each source image are identified and used to guide the fusion process in the NSST domain. Finally, the fused image is obtained using the inverse NSST. Experimental results show that the proposed method can not only extract more important detailed information from the source images, but also effectively avoid the introduction of artificial information. It significantly outperforms the discrete wavelet transform (DWT)-based fusion method, the non-subsampled contourlet transform-based fusion method and the NSST-based fusion method (see Miao et al. 2011) in terms of both visual quality and objective evaluation.
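The focused-region principle, preferring the source whose pixels have the smaller square error against the initial fused image, can be sketched as a per-pixel selection over local windows. The NSST decomposition, morphological post-processing and border-region handling are omitted, and the window size is an illustrative choice.

```python
import numpy as np

def select_focused(src_a, src_b, init_fused, win=3):
    """Per pixel, pick the source image whose local squared error
    against an initial fused image is smaller."""
    pad = win // 2

    def local_sse(src):
        # squared error map, summed over a win x win neighbourhood
        d = (src.astype(np.float64) - init_fused.astype(np.float64)) ** 2
        p = np.pad(d, pad, mode='edge')
        out = np.zeros_like(d)
        for di in range(win):
            for dj in range(win):
                out += p[di:di + d.shape[0], dj:dj + d.shape[1]]
        return out

    mask = local_sse(src_a) <= local_sse(src_b)  # True where src_a is in focus
    return np.where(mask, src_a, src_b)
```

In the full method this decision map is cleaned morphologically and then steers coefficient selection in the NSST domain rather than copying pixels directly.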