Online ISSN
1751-9640
Print ISSN
1751-9632
IET Computer Vision
Volume 6, Issue 2, March 2012
- Author(s): R. Niese ; A. Al-Hamadi ; A. Farag ; H. Neumann ; B. Michaelis
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 79–89
- DOI: 10.1049/iet-cvi.2011.0064
- Type: Article
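The abstract below combines per-frame geometric features with transient optical-flow features before classification. As a rough illustration (not the authors' implementation), the sketch below uses coarse frame differencing as a stand-in for optical flow and a nearest-centroid rule as a stand-in for the ANN/SVM classifiers; all function names and parameters are assumptions.

```python
import numpy as np

def transient_features(prev_frame, curr_frame, grid=2):
    """Crude stand-in for optical-flow transient features: mean absolute
    frame difference pooled over a coarse grid of image blocks."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    bh, bw = h // grid, w // grid
    return np.array([diff[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
                     for i in range(grid) for j in range(grid)])

def combined_feature(geometric_vec, prev_frame, curr_frame):
    """Concatenate the per-frame geometric vector with transient features."""
    return np.concatenate([np.asarray(geometric_vec, float),
                           transient_features(prev_frame, curr_frame)])

def nearest_centroid_predict(centroids, x):
    """Toy classifier stand-in for the ANN/SVM used in the paper."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[c]) for c in labels]
    return labels[int(np.argmin(dists))]
```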
Facial expression recognition is a useful feature in modern human-computer interaction (HCI). To build efficient and reliable recognition systems, face detection, feature extraction and classification must be realised robustly. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical-flow features and illustrates their comparison and integration for facial expression recognition. In the authors' method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which form a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, which yields the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.
- Author(s): M. Yu ; S.M. Naqvi ; A. Rhuma ; J. Chambers
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 90–100
- DOI: 10.1049/iet-cvi.2011.0046
- Type: Article
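The fall-detection entry below compares one-class boundary methods, including the kth-nearest-neighbour classifier. A minimal sketch of that boundary test, assuming training features from the target class only and an illustrative threshold (the names and values are not from the paper):

```python
import numpy as np

def knn_boundary_score(train, x, k=2):
    """Distance from sample x to its kth nearest training sample;
    small scores mean x lies inside the learned one-class boundary."""
    diffs = np.asarray(train, float) - np.asarray(x, float)
    d = np.sort(np.linalg.norm(diffs, axis=1))
    return d[k - 1]

def in_fall_region(train_fall_features, x, threshold, k=2):
    """Accept x as a 'fall' if it lies within the boundary threshold."""
    return knn_boundary_score(train_fall_features, x, k) <= threshold
```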
In this study, the authors introduce a video-based robust fall detection system for monitoring an elderly person in a smart room environment. Video features, namely the centroid and orientation of a voxel person, are extracted. The boundary method, an example of a one-class classification technique, is then used to determine whether the incoming features lie in the 'fall region' of the feature space, thereby effectively distinguishing a fall from other activities such as walking, sitting, standing, crouching or lying. Four different boundary methods, k-centre, kth nearest neighbour, one-class support vector machine and single-class minimax probability machine (SCMPM), are assessed on representative test datasets. The comparison is made on three aspects: (i) true positive rate, false positive rate and geometric mean of detection performance; (ii) robustness to noise in the training dataset; and (iii) computational time in the test phase. From the comparison results, the authors show that the SCMPM achieves the best overall performance. By applying one-class classification techniques to three-dimensional (3-D) features, the authors obtain an efficient fall detection system with acceptable performance, as shown in the experiments, while avoiding the drawbacks of other traditional fall detection methods.
- Author(s): W. Ye ; C. Paulson ; D. Wu
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 101–110
- DOI: 10.1049/iet-cvi.2010.0028
- Type: Article
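The SAR target-detection pipeline below has three stages: image differencing, classification and diversity combining across passes. A minimal sketch of the first and last stages, with a simple threshold standing in for the Iterative-RELIEF-based maximum-margin classifier (all names and thresholds are illustrative assumptions):

```python
import numpy as np

def difference_detect(image, background, thresh):
    """Image differencing: flag pixels whose change from the reference
    image exceeds a threshold, highlighting candidate targets."""
    return np.abs(image.astype(float) - background.astype(float)) > thresh

def diversity_combine(masks):
    """Multi-pass change detection: a pixel is declared a target if it
    is flagged in a majority of the per-pass detection masks."""
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)
```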
A target detection algorithm is developed based on a supervised learning technique that maximises the margin between two classes, that is, the target class and the non-target class. Specifically, the proposed algorithm consists of (i) image differencing, (ii) a maximum-margin classifier, and (iii) diversity combining. Image differencing enhances and highlights the targets so that they are more distinguishable from the background. The maximum-margin classifier is based on a recently developed feature weighting technique called Iterative RELIEF; its objective is to achieve robustness against uncertainties and clutter. Diversity combining utilises multiple images to further improve detection performance, and hence is a type of multi-pass change detection. The authors evaluate the performance of the proposed detection algorithm using the CARABAS-II synthetic aperture radar (SAR) image data, and the experimental results demonstrate superior performance of the proposed algorithm compared to the benchmark algorithm.
- Author(s): R. Kapoor and A. Dhamija
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 111–120
- DOI: 10.1049/iet-cvi.2008.0070
- Type: Article
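The tracking entry below is built on a potential field with attractors and repellers: the tracked object drifts towards attractors and away from repellers. As a loose illustration only (not the paper's modified potential function), one gradient-style update step might look like this, with the gain and field form chosen purely for the example:

```python
import numpy as np

def potential_step(pos, attractor, repeller, gain=0.1):
    """Move the tracked position one step in a simple potential field:
    a linear pull towards the attractor plus an inverse-square-style
    push away from the repeller."""
    pos = np.asarray(pos, float)
    pull = np.asarray(attractor, float) - pos
    away = pos - np.asarray(repeller, float)
    push = away / (np.linalg.norm(away) ** 2 + 1e-9)  # avoid divide-by-zero
    return pos + gain * (pull + push)
```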
This study introduces a new potential function-based modelling approach for real-time object tracking with a single camera. Real-time tracking requires processing and classification techniques of minimal complexity that still provide accurate results. Particle filter-based algorithms allow accurate estimation of the displacement and scaling of the object for tracking, but at the cost of high computational complexity and complicated modelling. Also, existing single-camera tracking systems lack the ability to predict the direction of motion of the object, and their performance is significantly affected by occlusions. This study proposes a new method to address these four key issues. The method is principally based upon the potential function, which has been modified for motion image sequences. The potential function uses the current estimates of non-linear scaling and drift vector, with a priori knowledge of the object, to compute the tracking parameters in the form of diffusion matrices. The concept of attractors and repellers inside a potential field is used to classify different directions of motion in the image plane, such that the object tends to drift towards attractors and away from repellers. The attractor for every consecutive pair of frames is estimated using the set of transformations (displacement and scaling) that occur due to motion in a particular direction. The proposed technique works well with minimal tracking errors and a computational complexity of O(1).
- Author(s): E. Cuevas ; F. Wario ; D. Zaldivar ; M. Pérez-Cisneros
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 121–132
- DOI: 10.1049/iet-cvi.2010.0226
- Type: Article
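The circle-detection entry below encodes three non-collinear edge points as a candidate circle. That encoding is standard geometry, the circumcircle through three points, and can be sketched as follows (the function name and tolerance are illustrative, not from the paper):

```python
def candidate_circle(p1, p2, p3):
    """Circle (centre, radius) through three non-collinear points,
    as used to encode a candidate circle over the edge map."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r
```

In the paper, a matching (reinforcement) function then scores how many edge pixels actually lie on each candidate circle, and the learning automata update the selection probabilities accordingly.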
Circle detection in digital images has received considerable attention from the computer vision community in recent years, with a substantial amount of research devoted to finding an optimal detector. This article presents an algorithm for the automatic detection of circular shapes in complicated and noisy images without relying on conventional Hough transform (HT) principles. The proposed algorithm is based on learning automata (LA), a probabilistic optimisation method that explores an unknown random environment by progressively improving performance via a reinforcement signal (objective function). The approach encodes three non-collinear points as a candidate circle over the edge image. A reinforcement signal (matching function) indicates whether such candidate circles are actually present in the edge map. Guided by the values of this reinforcement signal, the probability set of the encoded candidate circles is modified through the LA algorithm so that they fit the actual circles on the edge map. Experimental results on several complex synthetic and natural images validate the efficiency of the proposed technique in terms of accuracy, speed and robustness.
- Author(s): V. Babaee Kashany and H.R. Pourreza
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 133–139
- DOI: 10.1049/iet-cvi.2010.0107
- Type: Article
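The entry below recovers camera rotation from vanishing points. The underlying geometry is standard: for known intrinsics K, the back-projected direction of a vanishing point is a column of the rotation matrix. A minimal sketch under that textbook formulation (assuming two orthogonal vanishing points; names are illustrative, and the recovered columns carry a sign ambiguity):

```python
import numpy as np

def direction_from_vanishing_point(K, vp):
    """Unit 3-D direction of the parallel lines whose image is vp."""
    d = np.linalg.solve(K, np.array([vp[0], vp[1], 1.0]))
    return d / np.linalg.norm(d)

def rotation_from_two_vps(K, vp1, vp2):
    """Two orthogonal vanishing points give two rotation columns;
    the third column follows from the cross product (up to sign)."""
    r1 = direction_from_vanishing_point(K, vp1)
    r2 = direction_from_vanishing_point(K, vp2)
    r2 = r2 - r1 * np.dot(r1, r2)   # re-orthogonalise against noise
    r2 /= np.linalg.norm(r2)
    return np.column_stack([r1, r2, np.cross(r1, r2)])
```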
Camera parameter estimation is an important issue in machine vision. This study proposes a new method to find the translation and rotation matrices of a camera in a sport scene based on points at infinity. The vanishing point of a set of parallel lines is the image of their common point at infinity, that is, the projection of the intersection of the parallel lines at infinity. Using this projective geometry constraint, the camera rotation in the projection matrix is computed directly from two vanishing points, with the rotation matrix of the camera extracted from those points at infinity. Computer simulations and real-data experiments are carried out to validate the proposed method.
- Author(s): H.D. Taghirad ; S.F. Atashzar ; M. Shahbazi
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 140–152
- DOI: 10.1049/iet-cvi.2010.0183
- Type: Article
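The pose-estimation entry below decomposes the problem so that the linear, uncertain part is handled by a simple Kalman filter. For readers unfamiliar with that building block, a scalar Kalman measurement update looks like this (a generic textbook update, not the paper's composite estimator):

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse the state estimate x
    (variance P) with a measurement z (variance R)."""
    Kg = P / (P + R)            # Kalman gain: trust measurement vs. prior
    x_new = x + Kg * (z - x)    # corrected estimate
    P_new = (1.0 - Kg) * P      # uncertainty shrinks after fusing z
    return x_new, P_new
```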
Three-dimensional (3D) pose estimation of a rigid object with only one camera plays a vital role in visual servoing systems, and the extended Kalman filter (EKF) is widely used for this task in unstructured environments. In this study, the stability of EKF-based 3D pose estimators is analysed in detail. The most challenging issue for state-of-the-art EKF-based 3D pose estimators is the possibility of divergence because of measurement and model noise. By analysing the stability of conventional EKF-based pose estimators, a composite technique is proposed to guarantee the stability of the procedure. In the proposed technique, the non-linear, uncertain estimation problem is decomposed into a non-linear, certain observation problem and a linear, uncertain estimation problem. The first part is handled using an extended Kalman observer and the second part is accomplished by a simple Kalman filter. Finally, experimental and simulation results are given to verify the robustness of the method and to compare its performance in noisy and uncertain environments to that of conventional techniques.
- Author(s): Z.-H. Xiong ; I. Cheng ; W. Chen ; A. Basu ; M.-J. Zhang
- Source: IET Computer Vision, Volume 6, Issue 2, pp. 153–163
- DOI: 10.1049/iet-cvi.2010.0115
- Type: Article
Using stereo disparity or depth information to detect and track moving objects has received increasing attention in recent years. However, this approach suffers from difficulties such as synchronisation between two cameras and doubling of the image-data size. Moreover, traditional stereo-imaging systems have a limited field of view (FOV), which means the cameras must be rotated when an object moves out of view. In this research, the authors present a depth-space partitioning algorithm for object tracking using a single-camera omni-stereo imaging system. The proposed method uses a catadioptric omni-directional stereo-imaging system to capture omni-stereo image pairs. This imaging system has a 360° FOV, avoiding the need to rotate cameras when tracking a moving object. To estimate omni-stereo disparity, the authors present a depth-space partitioning strategy: it partitions the three-dimensional depth space with a series of co-axial cylinders, models disparity estimation as a pixel-labelling problem and establishes an energy-minimisation function solved using graph-cuts optimisation. Based on the omni-stereo disparity-estimation results, the authors detect and track moving objects using the omni-stereo disparity motion vector, which is the difference between two consecutive disparity maps. Experiments on tracking a moving car justify the proposed method.
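The disparity motion vector used in the entry above is defined directly in the abstract as the difference between two consecutive disparity maps. A minimal sketch of that definition plus a simple change-threshold mask (the threshold value and function names are illustrative assumptions):

```python
import numpy as np

def disparity_motion_vector(disp_prev, disp_curr):
    """Difference between two consecutive disparity maps."""
    return disp_curr.astype(float) - disp_prev.astype(float)

def moving_object_mask(disp_prev, disp_curr, thresh=1.0):
    """Pixels whose disparity changed noticeably between frames are
    candidate moving-object pixels."""
    return np.abs(disparity_motion_vector(disp_prev, disp_curr)) > thresh
```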
Articles in this issue:
- Facial expression recognition based on geometric and optical flow features in colour image sequences
- One class boundary method classifiers for application in a video-based fall detection system
- Target detection for very high-frequency synthetic aperture radar ground surveillance
- Fast tracking algorithm using modified potential function
- Circle detection on images using learning automata
- Camera parameters estimation in soccer scenes on the basis of points at infinity
- Robust solution to three-dimensional pose estimation using composite extended Kalman observer and Kalman filter
- Depth space partitioning for omni-stereo object tracking