IET Computer Vision
Online ISSN 1751-9640 | Print ISSN 1751-9632
Volume 3, Issue 4, December 2009
Editorial: Selected papers from the Digital Image Computing: Techniques and Applications Conference 2008 (DICTA 2008)
- Author(s): A. Robles-Kelly
- Source: IET Computer Vision, Volume 3, Issue 4, p. 175 (1 page)
- DOI: 10.1049/iet-cvi.2009.9051
- Type: Article
Robust video stabilisation algorithm using feature point selection and delta optical flow
- Author(s): J. Cai and R. Walker
- Source: IET Computer Vision, Volume 3, Issue 4, p. 176–188 (13 pages)
- DOI: 10.1049/iet-cvi.2009.0036
- Type: Article

In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of videos obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. To remove this undesired jitter, accurate estimation of the global motion is essential. However, global motion is difficult to estimate accurately from mobile platforms because of increased estimation error and noise, and large moving objects in the scene add further error. Only very few motion estimation algorithms have been developed for video collected from mobile platforms, and this study shows that these algorithms fail when there are large moving objects in the scene. A theoretical proof is provided that the use of delta optical flow improves the robustness of video stabilisation in the presence of large moving objects. The authors also propose using sorted arrays of local motions and feature point selection to separate outliers from inliers. The proposed algorithm is tested on six video sequences, collected from one fixed platform, four mobile platforms and one synthetic video, three of which contain large moving objects. Experiments show that the proposed algorithm performs well on all of these sequences.
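As a rough illustration of the outlier-separation step described in the abstract above, the following Python sketch estimates global motion as a trimmed mean over sorted local motion vectors, so that the coherent motion of a large moving object shows up as a block at one end of the sorted array and is discarded. This is a generic stand-in, not the authors' algorithm: the function name, the trimming fraction and the `(dx, dy)` representation are assumptions, and the paper's delta-optical-flow computation is not reproduced here.

```python
import numpy as np

def robust_global_motion(local_motions, trim_frac=0.2):
    """Estimate the global (dx, dy) motion from per-feature local motions.

    Each component is sorted and its extremes trimmed; a large moving
    object contributes a run of similar outlier vectors that lands at
    one end of the sorted array and is dropped. A generic trimmed-mean
    sketch, not the paper's exact formulation.
    """
    m = np.asarray(local_motions, dtype=float)   # shape (N, 2)
    k = int(len(m) * trim_frac)                  # vectors trimmed per end
    estimate = []
    for c in range(2):                           # x component, then y
        s = np.sort(m[:, c])
        core = s[k:len(s) - k] if k > 0 else s
        estimate.append(core.mean())
    return tuple(estimate)
```

Subtracting such an estimate from each frame's measured motion yields the stabilising correction; the trimming makes the estimate insensitive to a minority of feature points on moving objects.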
Conditions for motion-background segmentation using fundamental matrix
- Author(s): S.N. Basah ; A. Bab-Hadiashar ; R. Hoseinnezhad
- Source: IET Computer Vision, Volume 3, Issue 4, p. 189–200 (12 pages)
- DOI: 10.1049/iet-cvi.2009.0030
- Type: Article

In common motion segmentation and estimation applications, where the exact nature of the objects' motions and the camera parameters are not known a priori, the most general motion model (the fundamental matrix) is applied. Although the estimation of a fundamental matrix and its use for motion segmentation are well understood, the conditions governing the feasibility of segmentation for different types of motion are yet to be discovered. In this work, the authors study the feasibility of separating the motion of a 3D object from its static background using the fundamental matrix. They prove theoretically that a purely translational motion cannot be separated from its static background, and that the success of motion-background segmentation depends on the rotational part of the motion. An extensive set of controlled experiments using both synthetic and real images was conducted to validate the theoretical results. In addition, the authors quantify the conditions for successful motion-background segmentation in terms of the minimum required rotation angle. These results are useful for practitioners designing motion segmentation or estimation solutions for computer vision problems.
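The epipolar test underlying fundamental-matrix segmentation can be sketched as follows, assuming a calibrated camera (so the essential matrix E stands in for F) and a purely translating camera. Static background points satisfy x2ᵀ E x1 = 0, while independently moving points generally do not, except in degenerate cases: an object translating parallel to the camera translation produces the same zero residual as the background, one concrete instance of the kind of translational degeneracy the abstract analyses. All names and the synthetic geometry here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    tx, ty, tz = t
    return np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])

def epipolar_residuals(E, x1, x2):
    """|x2^T E x1| per correspondence (rows are homogeneous 3-vectors)."""
    return np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))

# Camera 1 at the origin; camera 2 displaced so that a static point X
# appears at X + t in camera-2 coordinates. E = [t]_x is the essential
# matrix of this pure-translation camera motion.
t = np.array([1.0, 0.0, 0.0])
E = skew(t)

X_bg = np.array([[0.0, 1.0, 5.0], [2.0, -1.0, 4.0], [1.0, 2.0, 6.0]])
x1_bg, x2_bg = X_bg, X_bg + t                  # static background points

X_obj = np.array([[1.0, 1.0, 5.0], [-1.0, 2.0, 4.0]])
d_perp = np.array([0.0, 0.0, 0.5])             # object motion not parallel to t
x2_perp = X_obj + d_perp + t
d_par = np.array([0.5, 0.0, 0.0])              # object motion parallel to t
x2_par = X_obj + d_par + t
```

Thresholding these residuals classifies points as background (near zero) or independently moving, and the parallel-translation case shows why a residual test alone cannot separate every translational object motion.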
Curvature-based approach for multi-scale feature extraction from 3D meshes and unstructured point clouds
- Author(s): H.T. Ho and D. Gibbins
- Source: IET Computer Vision, Volume 3, Issue 4, p. 201–212 (12 pages)
- DOI: 10.1049/iet-cvi.2009.0044
- Type: Article

A framework for extracting salient local features from 3D models is presented in this study. In the proposed method, the amount of curvature at a surface point is specified by a positive quantitative measure known as the curvedness. This value is invariant to rigid-body transformations such as translation and rotation. The curvedness at a surface position is calculated at multiple scales by fitting a manifold to local neighbourhoods of different sizes. Points corresponding to local maxima and minima of curvedness are selected as features, and a confidence measure for each keypoint is calculated from the deviation of its curvedness from the neighbouring values. The advantage of this framework is its applicability to both 3D meshes and unstructured point clouds. Experimental results on a number of different models demonstrate the effectiveness and robustness of the approach.
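The curvedness measure referred to above is, in Koenderink's formulation, C = sqrt((κ1² + κ2²)/2) over the principal curvatures κ1, κ2, which is what makes it positive and rotation/translation invariant. A minimal sketch follows, assuming the neighbourhood points are already expressed in a local frame (origin at the surface point, z roughly along the normal) and substituting a plain quadratic least-squares fit for the paper's manifold fitting; the function name is an assumption.

```python
import numpy as np

def curvedness(points):
    """Curvedness C = sqrt((k1^2 + k2^2) / 2) at the origin of a patch.

    Assumes `points` (N, 3) are in a local frame: origin at the surface
    point, z roughly along the normal. A quadratic height field
    z = a x^2 + b xy + c y^2 + d x + e y is fitted by least squares;
    with a near-zero gradient at the origin, the principal curvatures
    are the eigenvalues of the Hessian [[2a, b], [b, 2c]].
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, _, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    k1, k2 = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
    return float(np.sqrt((k1 * k1 + k2 * k2) / 2.0))
```

Evaluating this over neighbourhoods of increasing radius gives the multi-scale behaviour the abstract describes; keypoints are then the local extrema of C across the surface.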
Band selection for hyperspectral imagery using affinity propagation
- Author(s): Y. Qian ; F. Yao ; S. Jia
- Source: IET Computer Vision, Volume 3, Issue 4, p. 213–222 (10 pages)
- DOI: 10.1049/iet-cvi.2009.0034
- Type: Article

Hyperspectral imagery generally contains enormous amounts of data because of its hundreds of spectral bands. Band selection is often adopted to reduce computational cost and to accelerate knowledge discovery and subsequent tasks such as classification. An exemplar-based clustering algorithm, affinity propagation, is proposed for band selection. Affinity propagation is derived from a factor graph; it operates by initially considering all data points as potential cluster centres (exemplars) and then exchanging messages between data points until a good set of exemplars and clusters emerges. Affinity propagation has been applied in computer vision and bioinformatics, and has been shown to be much faster than other clustering methods on large data. By combining information about the discriminative capability of each individual band with the correlation/similarity between bands, the exemplars generated by affinity propagation have higher importance and lower correlation/similarity. The performance of band selection is evaluated through a pixel-level image classification task. Experimental results demonstrate that, compared with some popular band selection methods, the bands selected by affinity propagation best characterise the hyperspectral imagery from the pixel classification standpoint.
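The message-passing scheme described in the abstract can be sketched with a minimal, textbook implementation of affinity propagation (Frey and Dueck's responsibility/availability updates on a similarity matrix). For band selection, each data point would be a flattened spectral band, so the returned exemplar indices are the selected bands. This is plain affinity propagation only: the paper's weighting by each band's discriminative capability is omitted, and the function name and parameters are assumptions.

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.5):
    """Minimal affinity propagation via responsibility/availability messages.

    S[i, k] is the similarity of point i to candidate exemplar k; the
    diagonal holds the exemplar preferences (more negative = fewer
    exemplars). Returns the indices of the exemplars found.
    """
    n = S.shape[0]
    R = np.zeros((n, n))                       # responsibilities r(i, k)
    A = np.zeros((n, n))                       # availabilities  a(i, k)
    I = np.arange(n)
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        M = A + S
        best = np.argmax(M, axis=1)
        first = M[I, best]
        M[I, best] = -np.inf                   # mask to find second-best
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[I, best] = S[I, best] - second
        R = damping * R + (1.0 - damping) * Rnew
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        Rp[I, I] = R[I, I]                     # keep r(k,k) un-clipped
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew[I, I].copy()
        Anew = np.minimum(Anew, 0.0)
        Anew[I, I] = diag                      # a(k,k) takes no min(0, .)
        A = damping * A + (1.0 - damping) * Anew
    return np.where(R[I, I] + A[I, I] > 0.0)[0]
```

The diagonal preferences play the role the abstract assigns to band importance: raising the preference of a discriminative band makes it more likely to be chosen as an exemplar, which is roughly where the paper's weighting would plug in.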