IET Computer Vision
Volume 11, Issue 3, April 2017
Mammographic mass classification according to Bi-RADS lexicon
- Author(s): Ferkous Chokri and Merouani Hayet Farida
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 189–198
- DOI: 10.1049/iet-cvi.2016.0244
- Type: Article

The goal of this study is to propose a computer-aided diagnosis system that differentiates between four Breast Imaging Reporting and Data System (BI-RADS) classes in digitised mammograms. The system is inspired by the radiologist's approach during examination, as codified in BI-RADS, where masses are described by their shape, boundary and density. Segmentation of masses is manual in the authors' approach, since detection is assumed to have already been performed. Once the segmented region is available, feature extraction is carried out: 22 visual features are computed automatically from shape, edge and texture properties, and a single human feature, the patient's age, is added. Classification is finally performed with a multi-layer perceptron under two separate schemes: the first distinguishes between the four BI-RADS classes (2, 3, 4 and 5); the second classifies abnormalities into two classes (benign and malignant). The proposed approach has been evaluated on 480 mammographic masses extracted from the Digital Database for Screening Mammography, and the results are encouraging.
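The abstract does not enumerate the 22 features; as a purely illustrative sketch of the shape category, the following computes area, boundary length and a crude circularity ratio from a binary mass mask (not the authors' feature set):

```python
import math

def shape_features(mask):
    """Simple shape descriptors from a binary mask (list of 0/1 rows).

    area      = number of foreground pixels
    perimeter = number of foreground pixels with a 4-connected
                background (or off-image) neighbour
    circularity = 4*pi*area / perimeter**2 -- a crude isoperimetric
    ratio; since the perimeter is a boundary-pixel count, values are
    only comparable between masks measured the same way.
    """
    h, w = len(mask), len(mask[0])
    area, perimeter = 0, 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    perimeter += 1  # touches background: boundary pixel
                    break
    circ = 4 * math.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circ}
```

In the paper's setting such descriptors, together with edge and texture measures and the patient's age, would form the input vector for the multi-layer perceptron.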
Salient object detection with high-level prior based on Bayesian fusion
- Author(s): Anzhi Wang ; Minghui Wang ; Gang Pan ; Xiaoyan Yuan
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 199–206
- DOI: 10.1049/iet-cvi.2016.0263
- Type: Article

Most approaches to salient object detection focus on two-dimensional images, while little attention has been paid to light fields, which provide visual information unavailable to conventional cameras for salient object detection and other computer vision applications. An effective salient object detection algorithm is proposed for light field data. First, boundary connectivity is computed on the all-focus image, and a background probability is derived from it by computing geodesic distances. Next, the authors rank the similarity of the superpixels of both the all-focus image and the depth map via graph-based manifold ranking to obtain two initial saliency maps. Finally, the two initial saliency maps, weighted by background probability, are fused and integrated with an objectness cue to produce the final saliency result. The authors also investigate how to integrate objectness effectively with other visual features, comparing two fusion strategies: linear fusion and Bayesian integration. Experiments show that light field features help saliency detection, that the Bayesian integration framework is a better choice than linear fusion, and that how multiple features are combined is crucial. The proposed algorithm handles challenging natural scenes, such as cluttered backgrounds and visually similar foreground and background, and produces visually favourable results in comparison with eight state-of-the-art methods.
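To make the two fusion strategies concrete, here is a minimal per-pixel sketch, assuming saliency values in [0, 1]; the Bayesian form below treats the two maps as independent evidence for "salient" and is a common formulation in the saliency literature, not necessarily the authors' exact one:

```python
def linear_fusion(s1, s2, w=0.5):
    """Convex combination of two saliency maps (flat lists of floats)."""
    return [w * a + (1 - w) * b for a, b in zip(s1, s2)]

def bayesian_fusion(s1, s2):
    """Posterior probability of 'salient' given two independent cues:
    p = s1*s2 / (s1*s2 + (1-s1)*(1-s2)).
    Agreeing strong cues reinforce each other more than averaging does."""
    out = []
    for a, b in zip(s1, s2):
        num = a * b
        den = num + (1 - a) * (1 - b)
        out.append(num / den if den else 0.5)
    return out
```

Note the qualitative difference: two cues of 0.8 average to 0.8 under linear fusion but exceed 0.94 under the Bayesian rule, which matches the paper's finding that agreement between features should be rewarded.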
Spider monkey optimisation assisted particle filter for robust object tracking
- Author(s): Rajesh Rohilla ; Vanshaj Sikri ; Rajiv Kapoor
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 207–219
- DOI: 10.1049/iet-cvi.2016.0201
- Type: Article

Particle filters (PFs) are sequential Monte Carlo methods that use a particle representation of the state-space model to implement the recursive Bayesian filter for non-linear and non-Gaussian systems. Owing to this property, PFs have been used extensively for object tracking in recent years. Although PFs provide a robust object tracking framework, they suffer from shortcomings: particle degeneracy, and the particle impoverishment introduced by the resampling step, result in a poor approximation of the posterior probability density function (PDF) of the state. To overcome these problems, this work combines two characteristics of population-based heuristic optimisation algorithms, exploration and exploitation, with a PF implementing a dynamic resampling method. The aim of the optimisation is to move particles into the high-likelihood region according to the cognitive effect and improve particle quality, while the objective of dynamic resampling is to maintain diversity in the particle set. The work uses the efficient spider monkey optimisation algorithm to achieve this. To test the proposed algorithm, experiments were carried out on a one-dimensional state estimation problem, a bearings-only tracking problem, standard videos and synthesised videos. The metrics obtained show that the proposed algorithm outperforms the simple PF, the particle swarm optimisation based PF and the cuckoo search based PF, and effectively handles the different challenges inherent in object tracking.
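For readers unfamiliar with the baseline being improved, here is a minimal bootstrap PF for a 1-D random-walk state — the "simple PF" the paper compares against, without the spider monkey optimisation step or the dynamic resampling trigger; all parameter values are illustrative:

```python
import math
import random

def particle_filter(observations, n=500, proc_std=0.5, obs_std=1.0, seed=0):
    """Bootstrap particle filter: predict with process noise, weight by a
    Gaussian observation likelihood, then stratified-resample. The
    resampling step is the one whose degeneracy/impoverishment trade-off
    the paper addresses with optimisation-guided dynamic resampling."""
    rng = random.Random(seed)
    particles = [observations[0] + rng.gauss(0.0, obs_std) for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # update: Gaussian likelihood of the observation as importance weight
        weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # stratified resampling to curb weight degeneracy
        cum, acc = [], 0.0
        for w in weights:
            acc += w
            cum.append(acc)
        new_particles, i = [], 0
        for k in range(n):
            u = (k + rng.random()) / n
            while i < n - 1 and cum[i] < u:
                i += 1
            new_particles.append(particles[i])
        particles = new_particles
    return estimates
```

The paper's contribution replaces the unconditional resampling above with a diversity-preserving dynamic variant and inserts an optimisation pass that moves low-weight particles towards high-likelihood regions.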
Multiscale spatially regularised correlation filters for visual tracking
- Author(s): Xiaodong Gu ; Xinyu Huang ; Alade Tokuta
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 220–225
- DOI: 10.1049/iet-cvi.2016.0241
- Type: Article

Recently, discriminative correlation filter based trackers have achieved highly successful results in many competitions and benchmarks. These methods exploit a periodicity assumption on the training samples to learn a classifier efficiently. However, this assumption produces unwanted boundary effects that severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularised discriminative correlation filters were proposed to reduce boundary effects, but they use a fixed-scale mask or a pre-designed weight function, respectively, which are unsuitable for large scale variation. In this study, the authors propose multiscale spatially regularised correlation filters (MSRCF) for visual tracking. The augmented objective reduces the boundary effect even under large scale variation, leading to a more discriminative model, and the proposed multiscale regularisation matrix gives MSRCF fast convergence. The online tracking algorithm performs favourably against state-of-the-art trackers on the OTB-2013 and OTB-2015 benchmarks in terms of efficiency, accuracy and robustness.
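To illustrate the "pre-designed weight function" the abstract refers to, here is an SRDCF-style quadratic spatial penalty map — small at the patch centre and growing towards the borders, so filter energy near the boundary is suppressed. The constants are illustrative; the paper's point is that such a fixed map does not adapt when the target's scale changes:

```python
def srdcf_weights(h, w, mu=0.1, lam=3.0):
    """Quadratic spatial regularisation weights over an h-by-w patch:
    weight(x, y) = mu + lam * (((x - cx)/w)**2 + ((y - cy)/h)**2),
    i.e. cheap filter coefficients at the centre, expensive at the edges."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return [[mu + lam * (((x - cx) / w) ** 2 + ((y - cy) / h) ** 2)
             for x in range(w)] for y in range(h)]
```

MSRCF, as described in the abstract, replaces this single fixed map with a multiscale regularisation matrix so that the penalty tracks the target's current scale.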
Patterns of approximated localised moments for visual loop closure detection
- Author(s): Can Erhan ; Evangelos Sariyanidi ; Onur Sencan ; Hakan Temeltas
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 237–245
- DOI: 10.1049/iet-cvi.2016.0237
- Type: Article

In the context of autonomous mobile robot navigation, loop closing is the correct identification of a previously visited location. Loop closing is essential for the accurate self-localisation of a robot, but it is challenging due to perceptual aliasing, which occurs when the robot traverses environments with visually similar places (e.g. forests, parks, office corridors). In this study, the authors apply local Zernike moments (ZMs) to loop closure detection. When computed locally, ZMs provide high discrimination ability, making it possible to distinguish similar-looking places. In particular, the authors show that increasing the density over which the local ZMs are computed improves loop closing accuracy significantly. They also present an approximation of ZMs that allows the use of integral images, enabling real-time operation. Experiments on real datasets with strong perceptual aliasing show that the proposed ZM-based descriptor outperforms state-of-the-art methods in loop closure accuracy. The source code of the implementation is released for research purposes.
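As background on the descriptor, a minimal sketch of an exact (non-approximated) Zernike moment of a single image patch follows; the paper's contribution is an integral-image approximation of this quantity computed densely, which this sketch does not attempt:

```python
import cmath
from math import factorial, pi

def zernike_moment(patch, n, m):
    """Zernike moment Z_{n,m} of a square grey-level patch, sampled on
    the unit disc inscribed in the patch (requires n - |m| even and
    non-negative). |Z_{n,m}| is invariant to in-plane rotation, which
    is part of what makes locally computed ZMs discriminative."""
    size = len(patch)
    c = (size - 1) / 2.0

    def radial(rho):
        # R_{n,m}(rho) = sum_s (-1)^s (n-s)! /
        #   (s! ((n+|m|)/2 - s)! ((n-|m|)/2 - s)!) * rho^(n-2s)
        total = 0.0
        for s in range((n - abs(m)) // 2 + 1):
            coef = ((-1) ** s * factorial(n - s)
                    / (factorial(s)
                       * factorial((n + abs(m)) // 2 - s)
                       * factorial((n - abs(m)) // 2 - s)))
            total += coef * rho ** (n - 2 * s)
        return total

    acc, count = 0j, 0
    for y in range(size):
        for x in range(size):
            dx, dy = x - c, y - c
            rho = (dx * dx + dy * dy) ** 0.5 / (c or 1.0)
            if rho > 1.0:
                continue  # sample only inside the unit disc
            theta = cmath.phase(complex(dx, dy))
            acc += patch[y][x] * radial(rho) * cmath.exp(-1j * m * theta)
            count += 1
    return (n + 1) / pi * acc / count
```

Rotating the patch multiplies Z_{n,m} by a pure phase factor, so its magnitude is unchanged — the invariance that lets the descriptor match the same place seen from different headings.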
Individual and group tracking with the evaluation of social interactions
- Author(s): Ahmet Yigit and Alptekin Temizel
- Source: IET Computer Vision, Volume 11, Issue 3, pp. 255–263
- DOI: 10.1049/iet-cvi.2016.0238
- Type: Article

Tracking groups of people is a challenging problem: groups may grow or shrink dynamically as individuals merge and split, and conventional trackers are not designed to handle such cases. In this study, the authors present a conjoint individual and group tracking (CIGT) framework based on particle filtering and online learning. CIGT has four complementary phases: two-phase association, false-positive elimination, tracking and learning. First, reliable tracklets are created and detection responses are associated with tracklets in two-phase association; hierarchical false-positive elimination is then performed on unassociated detection responses. In the tracking phase, CIGT calculates multiple weights from the observation and jointly models individuals and groups, using particle advection in its motion model to facilitate tracking of dense groups. In the learning phase, a discriminative appearance model consisting of shape, colour and texture features is extracted and used in AdaBoost online learning, and state estimation is performed on both individuals and groups. Experimental results show that the proposed framework compares favourably with other individual and group tracking methods on both real and synthetic datasets.
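The merge/split behaviour that motivates the framework can be made concrete with a toy overlap-based association rule (hypothetical helper names; the paper's two-phase association is considerably richer):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def group_events(prev_tracks, detections, thr=0.3):
    """Label each detection by how many previous tracks overlap it:
    0 -> 'new' track, 1 -> 'continue', >1 -> 'merge' (a group formed).
    Symmetrically, one track overlapping several detections would
    indicate a split."""
    events = []
    for d in detections:
        hits = [t for t in prev_tracks if iou(t, d) >= thr]
        events.append("new" if not hits
                      else "continue" if len(hits) == 1
                      else "merge")
    return events
```

In CIGT such events are not decided by raw overlap alone: the particle-filter state, advection-based motion and the online-learned appearance model all contribute to whether detections are treated as one group or separate individuals.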
In this issue:
- Mammographic mass classification according to Bi-RADS lexicon
- Salient object detection with high-level prior based on Bayesian fusion
- Spider monkey optimisation assisted particle filter for robust object tracking
- Multiscale spatially regularised correlation filters for visual tracking
- Patterns of approximated localised moments for visual loop closure detection
- Individual and group tracking with the evaluation of social interactions