IET Computer Vision
Online ISSN 1751-9640 | Print ISSN 1751-9632
Volume 2, Issue 2, June 2008
Editorial: Visual information engineering
- Author(s): S. Velastin
- Source: IET Computer Vision, Volume 2, Issue 2, p. 35–36
- DOI: 10.1049/iet-cvi:20089012
- Type: Article
Discriminant analysis of the two-dimensional Gabor features for face recognition
- Author(s): R.M. Mutelo ; W.L. Woo ; S.S. Dlay
- Source: IET Computer Vision, Volume 2, Issue 2, p. 37–49
- DOI: 10.1049/iet-cvi:20070075
- Type: Article

A new technique called two-dimensional Gabor Fisher discriminant (2DGFD) is derived and implemented for image representation and recognition. In this approach, Gabor wavelets are used to extract facial features. Principal component analysis (PCA) is applied directly to the Gabor-transformed matrices to remove redundant information from the image rows, and a new direct two-dimensional Fisher linear discriminant (direct 2DFLD) method is derived to remove further redundancy and form a discriminant representation better suited to face recognition. Conventional Gabor-based methods transform the Gabor images into a high-dimensional feature vector, which leads to high computational complexity and memory requirements and makes accurate analysis of the data difficult. The 2DGFD method was tested on face recognition using the ORL, Yale and extended Yale databases, in which the images vary in illumination, expression, pose and scale. It achieves 98.0% recognition accuracy using 20×3 feature matrices for each Gabor output on the ORL database, and 97.6% recognition accuracy on the extended Yale database compared with 91.8% and 91.6% for the 2DPCA and 2DFLD methods. The results also show that the proposed 2DGFD method is computationally more efficient than the Gabor Fisher classifier method: by approximately 8 times on the ORL, 135 times on the Yale and 1.2801×10^8 times on the extended Yale B data sets.
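The key computational idea is operating on image matrices directly rather than on flattened vectors. A minimal sketch of that two-dimensional projection step (plain 2D-PCA only, not the authors' full Gabor plus direct-2DFLD pipeline; the function name and arguments are illustrative):

```python
import numpy as np

def two_d_pca(images, k):
    """Two-dimensional PCA: works on image matrices directly rather than
    flattened vectors, so the scatter matrix is only d x d for m x d images.
    This is one building block of pipelines like 2DGFD, not the full method."""
    A_bar = np.mean(images, axis=0)  # mean image matrix
    # Image scatter matrix accumulated from whole image rows.
    G = sum((A - A_bar).T @ (A - A_bar) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)  # symmetric matrix -> eigh
    # Keep the k eigenvectors with the largest eigenvalues as projection axes.
    X = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return X, [A @ X for A in images]  # m x k feature matrices
```

For m×d images this keeps the eigenproblem at size d×d, which is the source of the efficiency gain the abstract reports over vectorised Gabor methods.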
Harnessing defocus blur to recover high-resolution information in shape-from-focus technique
- Author(s): R.R. Sahay and A.N. Rajagopalan
- Source: IET Computer Vision, Volume 2, Issue 2, p. 50–59
- DOI: 10.1049/iet-cvi:20070072
- Type: Article
Traditional shape-from-focus (SFF) uses focus as the single cue to derive the shape profile of a 3D object from a sequence of images. However, the stack of low-resolution (LR) observations is space-variantly blurred because of the finite depth of field of the camera. The authors propose to exploit the defocus information in the stack of LR images to obtain a super-resolved image as well as a high-resolution (HR) depth map of the underlying 3D object. Appropriate observation models are used to describe the image formation process in SFF. Local spatial dependencies of the pixel intensities and their depth values are accounted for by modelling the HR image and the HR structure as independent Markov random fields. Taking as input the LR images from the stack and the LR depth map, the authors first obtain the super-resolved image of the 3D specimen and then use it to reconstruct an HR depth profile of the object.
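The focus cue that SFF starts from can be sketched with a classical per-pixel focus measure over the stack. This is a hedged illustration of plain depth-from-focus only, not the authors' MRF-based super-resolution framework; the sum-modified-Laplacian choice and the function names are assumptions:

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure, a common choice in SFF:
    large where the image is locally sharp, near zero where defocused."""
    p = np.pad(img, 1, mode="edge")
    mlx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    mly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return mlx + mly

def depth_from_focus(stack):
    """Pick, per pixel, the frame index where the focus measure peaks.
    The resulting index map is the LR depth map a super-resolution
    stage would refine."""
    measures = np.stack([modified_laplacian(f) for f in stack])
    return np.argmax(measures, axis=0)
```

A pixel in sharp focus in frame i yields a high measure there, so the argmax over the stack recovers a coarse depth index per pixel.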
Stable bi-level and multi-level thresholding of images using a new global transformation
- Author(s): E.R. Davies
- Source: IET Computer Vision, Volume 2, Issue 2, p. 60–74
- DOI: 10.1049/iet-cvi:20070071
- Type: Article
A new transformation for finding global valleys in 1D distributions is described, with particular application to the thresholding of grey-scale images. The applied criterion function estimates the global significance of all the valleys that are located, and can thus cope with multi-mode distributions. Examples make it clear that one of the main advantages of the resulting global valley method (GVM) is that it reliably locates partially hidden minima without complicated analysis. Overall, the GVM is demonstrated to have very good stability properties and high sensitivity for the detection of subsidiary minima, including those arising near the ends of distributions that correspond to practically important image detail (such as defects or contaminants in an automated inspection scenario). The global analysis around which the method is formulated achieves these capabilities with modest computational demands.
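One plausible reading of the global-valley idea is to score each histogram bin by how strongly peaks dominate it on both sides, so that a valley is significant only if it sits between two substantial modes. This is a hedged sketch in the spirit of the GVM; the paper's exact criterion function may differ:

```python
def valley_significance(hist):
    """Score each bin by the smaller of the two peak heights above it,
    one taken from each side. Bins at the extremes, or bins not dominated
    on both sides, score zero. A simplified valley criterion, not
    necessarily the paper's exact formula."""
    n = len(hist)
    left_max = [max(hist[:j + 1]) for j in range(n)]   # running max from left
    right_max = [max(hist[j:]) for j in range(n)]      # running max from right
    return [min(max(0, left_max[j] - hist[j]),
                max(0, right_max[j] - hist[j])) for j in range(n)]

def global_valley_threshold(hist):
    """Bi-level threshold: the bin with the most significant global valley."""
    scores = valley_significance(hist)
    return max(range(len(hist)), key=scores.__getitem__)
```

On a bimodal histogram this selects the dip between the two modes while ignoring shallow local dips near the distribution ends.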
Task-related population characteristics in handwriting analysis
- Author(s): J. Chapran ; M.C. Fairhurst ; R.M. Guest ; C. Ujam
- Source: IET Computer Vision, Volume 2, Issue 2, p. 75–87
- DOI: 10.1049/iet-cvi:20070069
- Type: Article
An analysis of features extracted from handwriting samples according to writer demographics and writing-task characteristics is presented. The demographics studied include age, gender and handedness, while the handwriting tasks considered include writing the individual signature, form-filling, cheque completion and constructing free-form written text. By analysing different features of handwriting, the authors establish a link between a writer's individual characteristics, including demographic properties, the handwriting task being attempted and quantifiable features of handwriting such as pen velocity, acceleration and slant. Imitated or 'forged' handwriting is analysed on exactly the same basis. The analysis is performed on a newly collected database of handwriting samples from a population of 150 writers, which can be used in both forensic document inspection and automatic handwriting analysis research. All handwriting samples, including forgery attempts, were recorded temporally as a series of pen positional coordinates and scanned at a resolution of 600 dpi to enable both dynamic and static processing.
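Dynamic features such as pen velocity follow directly from the timed positional recordings the abstract describes. A minimal sketch, assuming samples of the form (t, x, y); the function name is illustrative, not from the paper:

```python
import math

def pen_velocities(points):
    """Instantaneous pen speed between consecutive timed samples.
    `points` is a sequence of (t, x, y) tuples, as produced by a
    digitising tablet; returns one speed value per sample interval."""
    return [math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
            for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:])]
```

Acceleration and slant can be derived similarly, by differencing the velocity series and measuring stroke angles respectively.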
Modelling periodic scene elements for visual surveillance
- Author(s): V. Leung ; A. Colombo ; J. Orwell ; S.A. Velastin
- Source: IET Computer Vision, Volume 2, Issue 2, p. 88–98
- DOI: 10.1049/iet-cvi:20070070
- Type: Article
Some urban scenes exhibit periodic variations that are relevant to visual surveillance applications. One example is the variation in background elements, such as that caused by moving escalators, lights and scrolling advertisements. When modelled correctly, incorporating these periodic elements as a Markov model in a foreground detection component can improve performance significantly. Another area where scene periodicity can be used is anomaly detection. In some underground metro stations where the flow of people is periodic, deviations from this periodicity can be interpreted as abnormal movements of people. This can be achieved by using a higher-dimensional model for the underlying data structure and mapping it to a one-dimensional signal for interpretation. This approach is tested, and the results show that abnormal behaviour can be detected automatically.
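The mapping to a one-dimensional signal suggests a simple sketch: compare each sample of a periodic activity signal against the mean for its phase in the cycle, and flag large deviations as anomalies. The phase-mean test and the tolerance parameter are assumptions for illustration, not the paper's exact model:

```python
def periodic_anomalies(signal, period, tol):
    """Flag indices whose value deviates from the phase-wise mean of a
    periodic 1D signal by more than `tol`. A hedged sketch of using
    periodicity for anomaly detection, not the authors' full method."""
    phase_sums = [0.0] * period
    phase_counts = [0] * period
    for i, v in enumerate(signal):
        phase_sums[i % period] += v
        phase_counts[i % period] += 1
    phase_mean = [s / c for s, c in zip(phase_sums, phase_counts)]
    return [i for i, v in enumerate(signal)
            if abs(v - phase_mean[i % period]) > tol]
```

A sample that breaks the expected cycle, such as a surge of people at an off-peak phase, stands out against the phase mean and is reported.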
Temporal reasoning for scenario recognition in video-surveillance using Bayesian networks
- Author(s): A. Ziani ; C. Motamed ; J.-C. Noyer
- Source: IET Computer Vision, Volume 2, Issue 2, p. 99–107
- DOI: 10.1049/iet-cvi:20070074
- Type: Article
The authors propose a high-level scenario recognition algorithm for video sequence interpretation, based on a Bayesian network approach. The model of a scenario contains two main layers: the first highlights events from the observed visual features, and the second performs temporal reasoning. The temporal layer uses specific nodes that permit an event-based approach, focusing on the lifetime of the events highlighted by the first layer. It then estimates the qualitative and quantitative relations between the different temporal events that are helpful for the recognition task. The overall recognition algorithm is illustrated on real indoor image sequences of an abandoned-baggage scenario.
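The qualitative relations between event lifetimes can be illustrated with a simplified subset of interval relations. This sketches the kind of reasoning the temporal layer performs over event lifetimes; it is not the authors' Bayesian network itself:

```python
def temporal_relation(a, b):
    """Qualitative relation between two event lifetimes, each given as
    a (start, end) pair. A simplified subset of interval relations,
    illustrating the temporal-layer reasoning only."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"      # a finishes before b starts
    if e2 < s1:
        return "after"       # b finishes before a starts
    if s1 == s2 and e1 == e2:
        return "equal"
    if s2 <= s1 and e1 <= e2:
        return "during"      # a lies inside b
    if s1 <= s2 and e2 <= e1:
        return "contains"    # b lies inside a
    return "overlaps"
```

In an abandoned-baggage scenario, for example, "bag appears" occurring during "person present" followed by "person absent" after "bag appears" is the kind of relational evidence such a layer would feed to the recognition stage.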
Extraction of activity patterns on large video recordings
- Author(s): L. Patino ; H. Benhadda ; E. Corvee ; F. Bremond ; M. Thonnat
- Source: IET Computer Vision, Volume 2, Issue 2, p. 108–128
- DOI: 10.1049/iet-cvi:20070062
- Type: Article
Extracting the hidden and useful knowledge embedded within video sequences, and thereby discovering relations between the various elements to support efficient decision-making, is a challenging task. This kind of knowledge discovery and information analysis is made possible by recent advances in object detection and tracking. The authors present how video information is processed with the aim of discovering knowledge about people's activity and extracting the relationships between people and contextual objects in the scene. First, the objects of interest and their semantic characteristics are derived in real time, and the semantic information related to the objects is represented in a format suitable for knowledge discovery. Next, two clustering processes derive the knowledge from the video data: agglomerative hierarchical clustering is used to find the main trajectory patterns of people, and relational analysis clustering is employed to extract the relationships between people, contextual objects and events. Finally, the authors evaluate the proposed activity-extraction model using real video sequences from underground metro networks (CARETAKER) and a building hall (CAVIAR).
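The first clustering step can be sketched with plain single-linkage agglomerative clustering; the paper's actual trajectory features and linkage choice may differ, and `dist` here is an illustrative caller-supplied metric:

```python
def agglomerative_clusters(items, dist, n_clusters):
    """Bottom-up single-linkage agglomerative clustering: start with one
    cluster per item and repeatedly merge the two closest clusters until
    `n_clusters` remain. Clusters are lists of item indices."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(dist(items[i], items[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)  # b > a, so indices stay valid
    return clusters
```

With a trajectory-to-trajectory distance supplied as `dist`, the surviving clusters correspond to the main movement patterns in the recording.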
In this issue:
- Editorial: Visual information engineering
- Discriminant analysis of the two-dimensional Gabor features for face recognition
- Harnessing defocus blur to recover high-resolution information in shape-from-focus technique
- Stable bi-level and multi-level thresholding of images using a new global transformation
- Task-related population characteristics in handwriting analysis
- Modelling periodic scene elements for visual surveillance
- Temporal reasoning for scenario recognition in video-surveillance using Bayesian networks
- Extraction of activity patterns on large video recordings