IET Image Processing
Volume 2, Issue 3, June 2008
Online ISSN 1751-9667 | Print ISSN 1751-9659
Editorial: Visual information engineering
- Author(s): P. Hobson
- Source: IET Image Processing, Volume 2, Issue 3, pp. 105–106
- DOI: 10.1049/iet-ipr:20089012
- Type: Article
Statistical, DCT and vector quantisation-based video codec
- Author(s): P. Bagheri Zadeh ; T. Buggy ; A. Sheikh Akbari
- Source: IET Image Processing, Volume 2, Issue 3, pp. 107–115
- DOI: 10.1049/iet-ipr:20070181
- Type: Article
The authors present a novel hybrid statistical, DCT and vector quantisation-based video-coding technique. In intra mode, an input frame is divided into a number of non-overlapping pixel blocks, and a discrete cosine transform converts each block into the frequency domain. Coefficients with the same frequency index in different blocks are then gathered into matrices, each matrix holding the coefficients of one frequency index. The matrix containing the DC coefficients is losslessly coded, while the matrices containing high-frequency coefficients are coded with a novel statistical encoder. In inter mode, overlapped block motion estimation/compensation exploits the temporal redundancy between successive frames and generates a displaced frame difference (DFD) for each inter-frame. A wavelet transform then decomposes the DFD frame into its frequency subbands; coefficients in the detail subbands are vector quantised, while coefficients in the baseband are losslessly coded. To evaluate the codec, the proposed scheme and the adaptive subband vector quantisation (ASVQ) video codec, which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms the ASVQ codec both subjectively and objectively at all bitrates.

Influence of downsampling filter characteristics on compression performance in wavelet-based scalable video coding
- Author(s): M. Mrak ; T. Zgaljic ; E. Izquierdo
- Source: IET Image Processing, Volume 2, Issue 3, pp. 116–129
- DOI: 10.1049/iet-ipr:20070185
- Type: Article
The application of different downsampling filters in video coding directly models visual information at lower resolutions and influences the compression performance of the chosen coding system. In wavelet-based scalable video coding, spatial scalability is achieved by using wavelets as downsampling filters; however, the characteristics of different wavelets influence the performance at the targeted spatio-temporal decoding points. An analysis of different downsampling filters in popular wavelet-based scalable video coding schemes is presented. Evaluation is performed for both intra- and inter-coding schemes using wavelets and standard downsampling strategies. On the basis of the obtained results, a new concept of inter-resolution prediction is proposed, which maximises the average performance by combining standard downsampling filters with wavelet-based coding.

Residue-free video coding with pixelwise adaptive spatio-temporal prediction
- Author(s): M.G. Day and J.A. Robinson
- Source: IET Image Processing, Volume 2, Issue 3, pp. 131–138
- DOI: 10.1049/iet-ipr:20070186
- Type: Article
The authors introduce residue-free video coding, in which motion-compensated predictions from surrounding frames and spatial predictions from the current frame are combined adaptively on a pixel-by-pixel basis. As a consequence, residue frames, blocks or regions are never explicitly formed. The authors describe a practical embodiment of a residue-free coder, temporal prediction trees, in which the local adaptation is conditioned frame to frame by a control parameter derived from global motion statistics. Using fixed-block-size motion compensation, the resulting coder is competitive with conventional residue-based compression and, at higher data rates, is able to outperform H.264/AVC for high-activity sequences.

Image-based facial recognition in the domain of high-order polynomial one-way mapping
- Author(s): M.A. Dabbah ; W.L. Woo ; S.S. Dlay
- Source: IET Image Processing, Volume 2, Issue 3, pp. 139–149
- DOI: 10.1049/iet-ipr:20070203
- Type: Article
The authors present a secure facial recognition system in which the biometric data are transformed into a cancellable domain using high-order polynomial functions and co-occurrence matrices. The proposed method provides both high recognition accuracy and biometric data protection. Protection relies on the polynomial functions: a newly reissued cancellable biometric can be obtained simply by changing the polynomial parameters. Besides protecting the data, the reconstructed co-occurrence matrices also contribute to the accuracy enhancement. The Hadamard product is used to reconstruct the new measure and shows high flexibility in providing a new relationship between two independent covariance matrices. The cancellable biometric is treated in the same manner as the original biometric data, which enables the original data to be replaced by the cancellable version with no change to the authentication system. The two-dimensional principal component analysis recognition algorithm is used at the authentication stage. Results show high non-reversibility of the data, improved accuracy over the original data, and a recognition rate raised to 97%.

Low-delay video control in a personal area network for augmented reality
- Author(s): R. Razavi ; M. Fleury ; M. Ghanbari
- Source: IET Image Processing, Volume 2, Issue 3, pp. 150–162
- DOI: 10.1049/iet-ipr:20070183
- Type: Article
A personal area network (PAN) is a feature of an augmented reality system, transmitting modified video for real-time display. Low-delay communication of encoded video over a Bluetooth wireless PAN is achieved in favourable channel conditions by combining dynamic packetisation of video slices with centralised, predictive rate control. The result is minimised packet delay (below 0.05 s) and high-quality 40 dB video, with packet loss from radio-frequency noise limited to 4%. Where channel conditions produce error bursts, dynamic rate change is introduced to reduce the need for packet retransmission and to improve power efficiency.

SimBIL: appearance-based simulation of burst-illumination laser sequences
- Author(s): A. Nayak ; E. Trucco ; A. Ahmad ; A.M. Wallace
- Source: IET Image Processing, Volume 2, Issue 3, pp. 165–174
- DOI: 10.1049/iet-ipr:20070207
- Type: Article
A novel appearance-based simulator of burst-illumination laser (BIL) sequences, SimBIL, is presented, and the sequences it generates are compared with those of a physics-based simulator that the authors have developed concurrently. SimBIL uses a database of 3D geometric object models stored as faceted meshes and attaches example-based representations of material appearance to each model surface. The representation is based on examples of intensity–time profiles for a set of orientations and materials; the dimensionality of the large set of profile examples (called a profile eigenspace) is reduced by principal component analysis. Depth and orientation of the model facets are used to simulate time gating, deciding which object parts are imaged in every frame of the sequence. Model orientation and material type are used to index the profile eigenspaces and assign an intensity–time profile to frame pixels. To assess the practical merit of SimBIL sequences, the authors compare range images reconstructed by a reference algorithm from SimBIL sequences, from physics-based simulator sequences, and from real BIL sequences.
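The intra-mode coefficient regrouping described in the Bagheri Zadeh et al. abstract can be illustrated with a minimal sketch; the block size, frame values and function names below are invented for illustration and are not taken from the paper:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n x n block with orthonormal scaling."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def group_by_frequency(frame, n=4):
    """Split the frame into n x n blocks, transform each block, then gather
    coefficients sharing a frequency index (u, v) into one matrix per index."""
    rows, cols = len(frame) // n, len(frame[0]) // n
    coeffs = [[dct2([r[bx * n:(bx + 1) * n] for r in frame[by * n:(by + 1) * n]])
               for bx in range(cols)] for by in range(rows)]
    # matrices[u][v] holds the (u, v) coefficient of every block in the frame
    return [[[[coeffs[by][bx][u][v] for bx in range(cols)] for by in range(rows)]
             for v in range(n)] for u in range(n)]

frame = [[(x + y) % 16 for x in range(8)] for y in range(8)]
dc_matrix = group_by_frequency(frame)[0][0]   # DC coefficient of every block
```

In the paper's terms, `dc_matrix` is the matrix that would be losslessly coded, while the matrices at higher (u, v) would go to the statistical encoder.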
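For the Mrak et al. entry, one level of the 2-D Haar analysis low-pass branch shows how a wavelet can serve as the downsampling filter; Haar is chosen here purely for brevity, whereas the paper compares the characteristics of several wavelets and standard filters:

```python
def haar_downsample(img):
    """Approximation (LL) subband of one 2-D orthonormal Haar analysis step:
    each 2 x 2 neighbourhood maps to (a + b + c + d) / 2, giving the
    half-resolution frame used for spatial scalability."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 2.0
             for x in range(w)] for y in range(h)]

half = haar_downsample([[10, 10, 30, 30],
                        [10, 10, 30, 30]])
```

A decoder targeting a lower spatio-temporal point reconstructs only this subband, which is why the filter's characteristics directly shape the quality at that resolution.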
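The pixelwise adaptive combination in the Day and Robinson entry can be caricatured as follows; the selection rule used here (compare both predictors on the already-known left neighbour) is a hypothetical causal criterion for illustration, not the paper's temporal prediction trees:

```python
def predict_frame(prev, cur):
    """Predict every pixel of `cur` without ever forming a residue frame:
    choose per pixel between a temporal predictor (co-located pixel in the
    previous frame) and a spatial predictor (mean of left and top
    neighbours), switching on whichever predicted the left neighbour better."""
    h, w = len(cur), len(cur[0])
    pred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            temporal = prev[y][x]
            if x == 0 or y == 0:
                pred[y][x] = temporal          # no causal context yet
                continue
            spatial = (cur[y][x - 1] + cur[y - 1][x]) // 2
            # causal test: encoder and decoder both already know cur[y][x-1]
            err_t = abs(prev[y][x - 1] - cur[y][x - 1])
            left_spatial = ((cur[y][x - 2] + cur[y - 1][x - 1]) // 2
                            if x > 1 else prev[y][x - 1])
            err_s = abs(left_spatial - cur[y][x - 1])
            pred[y][x] = spatial if err_s < err_t else temporal
    return pred

# on a static scene the temporal predictor always wins,
# so the prediction reproduces the frame exactly
still = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Because the switch depends only on data both sides already have, no residue frame, block or region needs to be signalled.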
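For the Dabbah et al. entry, the revocability idea behind a polynomial one-way mapping can be sketched in a few lines; the polynomial degree and coefficient values are arbitrary placeholders, not the paper's parameters:

```python
def cancellable_template(features, key_coeffs):
    """Map each feature value through a high-order polynomial whose
    coefficients act as a revocable key: reissuing a template just means
    choosing new coefficients, leaving the underlying biometric unchanged."""
    return [sum(c * f ** k for k, c in enumerate(key_coeffs)) for f in features]

face = [0.2, 0.5, 0.9]                                    # toy feature vector
t1 = cancellable_template(face, [0.0, 1.5, -2.0, 4.0])    # issued key
t2 = cancellable_template(face, [1.0, 0.5, 3.0, -1.0])    # reissued key
```

The transformed template is matched in place of the original data, so a compromised template is revoked by discarding its coefficients rather than the biometric itself.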
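The dynamic rate change mentioned in the Razavi et al. entry can be caricatured as a loss-driven controller; the thresholds, scaling factors and limits below are invented for illustration and do not come from the paper:

```python
def adapt_bitrate(bitrate, loss_window, low=0.02, high=0.04,
                  cut=0.8, grow=1.1, floor=64_000, ceiling=2_000_000):
    """Loss-driven rate control: back off during error bursts so that fewer
    packets need retransmitting, and creep back up on a clean channel."""
    loss = sum(loss_window) / len(loss_window)   # recent packet-loss ratio
    if loss > high:
        return max(floor, int(bitrate * cut))    # burst: reduce encoder rate
    if loss < low:
        return min(ceiling, int(bitrate * grow)) # clean: recover quality
    return bitrate

rate = adapt_bitrate(1_000_000, [1, 0] * 5)      # bursty window: back off
```

Cutting the source rate, rather than retransmitting lost packets, is what yields the power-efficiency benefit the abstract describes.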
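Finally, the profile-eigenspace reduction in the SimBIL entry rests on principal component analysis; a dependency-free power-iteration sketch for the leading component (a stand-in for a full PCA, not the authors' implementation) looks like this:

```python
def leading_pc(profiles, iters=100):
    """Leading principal component of a set of intensity-time profiles via
    power iteration, without forming the covariance matrix explicitly."""
    n, d = len(profiles), len(profiles[0])
    mean = [sum(p[i] for p in profiles) / n for i in range(d)]
    centred = [[p[i] - mean[i] for i in range(d)] for p in profiles]
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(row[i] * v[i] for i in range(d)) for row in centred]
        w = [sum(proj[j] * centred[j][i] for j in range(n)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        v = [c / norm for c in w]
    return v

# profiles that vary only in the first sample: the component is (+/-1, 0)
pc = leading_pc([[0, 0], [2, 0], [4, 0], [6, 0]])
```

Projecting every profile onto a few such components is what shrinks the large example set into a compact eigenspace that the simulator can index by orientation and material.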